How do I add a query hint to this insert SQL?

After our SQL Server was upgraded from 2012 to 2014, some stored procedures' performance has decreased.
We found that a query hint can be used to solve this problem; however, I wonder how we can add a query hint to a SQL statement of the following structure:
INSERT INTO TABLE1 ....
SELECT ...
FROM TABLE2
--OPTION (QUERYTRACEON 9481)
If I simply add it at the end of the snippet (the commented line), does it affect the SELECT, the INSERT, or both? And what if I want to apply the query hint to only the SELECT or only the INSERT?

If you use this trace flag, you instruct the query optimizer to use the SQL Server 2012 cardinality estimator.
You can do this at the query level, as stated in the documentation:
If a trace flag affects any query execution plan in an unwanted way,
but improves some other query execution plan, you may want to enable a
corresponding trace flag for only a particular query.
If you use it as you are doing now, it will affect the whole INSERT INTO ... SELECT statement. If you only want one of the queries to run under the old cardinality estimator, you need to split the statement into two separate queries (for example, select the data into a temp table and then insert from it into your target table, as sketched below) so that the trace flag applies to only one of them.
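For example, a rough sketch of that split (col1/col2 are placeholder column names; only the staging SELECT is compiled under the old estimator):
SELECT col1, col2
INTO #staging
FROM TABLE2
OPTION (QUERYTRACEON 9481);   -- only this SELECT uses the version 70 (2012) estimator

INSERT INTO TABLE1 (col1, col2)
SELECT col1, col2
FROM #staging;                -- compiled with the default (2014) estimator

DROP TABLE #staging;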
You can read more about this also on this StackOverflow post.

Trace flag 9481 forces the query optimizer to use version 70 (the SQL Server 2012 version) of the cardinality estimator when creating the query plan (msdn)
It certainly affects all the operators in your query, both the insert and the select, but what would you expect to be different in the insert operator?
Cardinality estimation affects join strategies, and an insert does not perform any scan/seek/join.

Related

SQL CE parameter passing

I have a number of queries that check whether a record exists in a table and, if there is no such record, add it. There are two queries involved in this process:
1) select id from table where <conditions>
If any ids are returned, I do nothing (or update the matching records as needed); if no ids are returned, I execute the second query:
2) insert into table(<columns>) values(<values>)
Of course <conditions>, <columns> and <values> are correct strings for their context.
What I want is to join these 2 queries into one:
insert into table(<columns>)
select <parameter list>
where not exists(select id from table where <conditions>)
So there will be only one query involved instead of 2.
An example of this query is:
insert into persons(name) select @Name where not exists (select id from persons where name = @Name)
The problem is that I use parameters for the queries, and when I execute this composite query I get an exception saying "A parameter is not allowed in this location. Ensure that the '@' sign is in a valid location or that parameters are valid at all in this SQL statement."
Is there a way to get around this parameter restriction and not get an exception for this query?
I'm using C# and SqlCe-related classes.
What's the benefit of merging the queries into a composite? If the query is supported (and I actually doubt that the SQL Compact query parser supports this), the SQL engine is still going to have to do essentially both queries anyway. I find the two separate queries more readable.
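For comparison, kept as two separate parameterized statements (using the persons example from the question, with the existence check done in application code between them):
-- Step 1: check for an existing row
SELECT id FROM persons WHERE name = @Name;

-- Step 2: run only if step 1 returned no rows
INSERT INTO persons (name) VALUES (@Name);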
As separate queries, I'm also willing to bet you could make it substantially faster than the composite query anyway by using TableDirect. Take a look at the answer to this question and benchmark it in your app. It's likely 10x faster than your existing query pair because it forgoes the query parser altogether.

Do CTEs use any space in tempdb?

Do CTEs use any space in tempdb or does it use memory exclusively?
I've tagged the question with both mssql 2005 and 2008 as I use both.
I'll try not to copy/paste MSDN
It doesn't matter.
A CTE is independent of query execution: it is only a language construct. Think of it as a neat derived table or subquery.
This means that except for recursive CTEs (see later), all CTEs can be coded inline. If you use the CTE code once, it is for readability. If you use the CTE twice or more, then it is defensive: you don't want to make a mistake and have the derived table different each use.
Where a CTE is used twice or more, then that code will be executed twice or more. It won't be executed once and cached in tempdb.
Summary: it may or may not use tempdb, just as if the code were written inline.
Note: a recursive CTE is simply a derived table nested inside a derived table, nested inside another, and so on, so the same applies.
You can see this in Tony Rogerson's article. The use of tempdb would happen anyway if the code were written inline. He also notes that using a temp table can be better because of the "macro" expansion I explained above.
FYI: the same applies to views. Just macros.
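To make the "macro" point concrete, here is a hedged sketch (dbo.Sales is a made-up table): the CTE is expanded at each reference, so it is typically evaluated twice below, whereas the temp-table version computes the intermediate result once.
WITH SalesByCustomer AS
(
    SELECT CustomerID, SUM(Amount) AS Total
    FROM dbo.Sales
    GROUP BY CustomerID
)
SELECT s.CustomerID, s.Total
FROM SalesByCustomer s
WHERE s.Total > (SELECT AVG(Total) FROM SalesByCustomer);  -- second reference = second expansion

-- Materialize it yourself if you want it computed only once:
SELECT CustomerID, SUM(Amount) AS Total
INTO #SalesByCustomer
FROM dbo.Sales
GROUP BY CustomerID;

SELECT s.CustomerID, s.Total
FROM #SalesByCustomer s
WHERE s.Total > (SELECT AVG(Total) FROM #SalesByCustomer);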
From MSDN: http://msdn.microsoft.com/en-us/library/ms345368.aspx
A common table expression can be thought of as a temporary result set that is defined within the execution scope of a single SELECT, INSERT, UPDATE, DELETE, or CREATE VIEW statement.
When the query plan for a common table expression query uses a spool operator to save intermediate query results, the Database Engine creates a work table in tempdb to support this operation.

Debugging sub-queries in TSQL Stored Procedure

How do I debug a complex query with multiple nested sub-queries in SQL Server 2005?
I'm debugging a stored procedure and trigger in Visual Studio 2005. I'd like to be able to see what the results of these sub-queries are, as I feel that this is where the bug is coming from. An example query (slightly redacted) is below:
UPDATE
foo
SET
DateUpdated = ( SELECT TOP 1 inserted.DateUpdated FROM inserted )
...
FROM
tblEP ep
JOIN tblED ed ON ep.EnrollmentID = ed.EnrollmentID
WHERE
ProgramPhaseID = ( SELECT ...)
Visual Studio doesn't seem to offer a way for me to watch the result of the subquery. Also, if I use a temporary table to store the results (temporary tables are used elsewhere too), I can't view the values stored in that table.
Is there any way that I can add a watch, or in some other way view these subqueries? I would love it if there were some way to "step into" the query itself, but I imagine that isn't possible.
OK, first I would be leery of using subqueries in a trigger. Triggers should be as fast as possible, so get rid of any correlated subqueries, which might run row by row instead of in a set-based fashion. Rewrite them as joins: if you only want to update records based on what was in the inserted table, then join to it, and also join to the table you are updating. Exactly what are you trying to accomplish with this trigger? It might be easier to give advice if we understood the business rule you are trying to implement.
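For example, a rough sketch of the set-based rewrite of that TOP 1 subquery (the EnrollmentID join column is an assumption, since the query in the question is redacted):
UPDATE f
SET    DateUpdated = i.DateUpdated
FROM   foo f
JOIN   inserted i ON i.EnrollmentID = f.EnrollmentID;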
To debug a trigger this is what I do.
I write a script to:
1. Do the actual insert to the table without the trigger on it.
2. Create a temp table named #inserted (and/or one named #deleted).
3. Populate the temp table the way you would expect the inserted table in the trigger to be populated by the insert you do.
4. Add the trigger code (minus the CREATE or ALTER TRIGGER parts), substituting #inserted every time it references inserted. (If you plan to run this multiple times until you are ready to use it in a trigger, wrap it in an explicit transaction and roll back after checking your results; see the sketch after this list.)
5. Add a query to check the table(s) you are changing with the trigger for the values you wanted to change.
6. Now if you need to add debug statements to see what is happening between steps, you can do so.
7. Run it, making changes, until you get the results you want.
8. Once you have the query working as you expect it to, it is easy to take the # signs off inserted and use it to create the body of the trigger.
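A minimal sketch of that harness, with made-up table and column names loosely borrowed from the question (adjust them to your real schema):
-- step 2 first, so the temp table survives the rollback below
CREATE TABLE #inserted (EnrollmentID int, DateUpdated datetime);

BEGIN TRANSACTION;

-- step 1: the actual insert, done while the trigger is not in place
INSERT INTO tblEP (EnrollmentID, DateUpdated) VALUES (42, GETDATE());

-- step 3: populate #inserted the way the real inserted table would be populated
INSERT INTO #inserted (EnrollmentID, DateUpdated)
SELECT EnrollmentID, DateUpdated FROM tblEP WHERE EnrollmentID = 42;

-- step 4: paste the trigger body here, with "inserted" replaced by "#inserted"
UPDATE f
SET    DateUpdated = i.DateUpdated
FROM   foo f
JOIN   #inserted i ON i.EnrollmentID = f.EnrollmentID;

-- step 5: check the results, then roll back so the script can be rerun
SELECT * FROM foo;

ROLLBACK TRANSACTION;
DROP TABLE #inserted;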
This is what I usually do in this type of scenario: print out the exact SQL generated by each subquery, then run each of them in Management Studio as suggested above. Check whether the different parts are giving you the data you expect.

Upsert (update or insert) in Sybase ASE?

I'm writing an application to move data from Oracle to Sybase and need to perform update / insert operations. In Oracle, I'd use MERGE INTO, but it doesn't seem to be available in Sybase (not in ASE, anyway). I know this can be done with multiple statements, but for a couple of reasons, I'm really trying to get this into a single statement.
Any suggestions?
ASE 15.7 has this feature.
Find the docs here:
http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc36272.1570/html/commands/commands84.htm
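For orientation, a rough sketch of what a standards-style MERGE looks like (hypothetical target/source tables and columns; check the linked documentation for ASE's exact syntax):
MERGE INTO target t
USING source s
ON (t.pk = s.pk)
WHEN MATCHED THEN
    UPDATE SET col = s.col
WHEN NOT MATCHED THEN
    INSERT (pk, col) VALUES (s.pk, s.col)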
Sybase and DB2 are very IEC/ISO/ANSI SQL Standard-compliant. MS is a little less so.
Oracle is not very Standard-compliant at all (despite what the glossies say). More important, due to its limitations, the method it uses to overcome them is to introduce extensions to SQL (which are not required for the other DBMSs, which do not have those limitations). It's a nice way of making sure that customers do not migrate away.
So the best advice for you is to learn the SQL Standard way of doing whatever you were doing on the Oracle side, and second (not first) to learn about Sybase's or DB2's (or whatever) extensions.
"MERGE" and "UPSERT" do not exist in SQL; they exist in Oracle only. The bottom line is, you have to UPDATE and INSERT in two separate operations.
In SQL, UPDATE and INSERT apply to a single table; you may have quite complex FROM clauses.
For "MERGE", that is simply an:
INSERT target ( column_list ) -- we do have defaults
SELECT ( column_list )
FROM source
WHERE primary_key NOT IN ( SELECT primary_key FROM target )
Update is simply the complement:
UPDATE target SET target_column = source_column, ...
FROM source
WHERE primary_key IN ( SELECT primary_key FROM target )
In the UPDATE it is easy to merge the WHERE conditions and eliminate the Subquery (I am showing it to you for explanation).
As I understand it, Oracle is abysmal at executing subqueries (standard SQL), which is why they have all these non-standard "MERGE", etc., the purpose of which is to avoid the standard subquery syntax, which every other DBMS performs with ease.
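Put concretely, with hypothetical tables target(pk, col) and source(pk, col), and with the UPDATE's subquery merged into a join as described, the pair of statements might look like:
UPDATE target
SET    col = s.col
FROM   target, source s
WHERE  target.pk = s.pk

INSERT INTO target ( pk, col )
SELECT s.pk, s.col
FROM   source s
WHERE  s.pk NOT IN ( SELECT pk FROM target )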
Unfortunately, it is impossible to insert and update a table in one statement without using MERGE, which, by the way, does exist in standard SQL as of SQL:2008 (according to this article, anyway) and is supported by almost all major databases, except Sybase ASE and PostgreSQL.
Merge exists in SAP ASE 15.7 upwards, as mentioned here and here
Replace / Upsert exists in SAP ASE 16.0 and up.
You'll need to update to access them.
Maybe this could work; it's tested in ASA 9.
insert into my_table (columns) on existing update values (values);
Maybe you could try to fake it with INSERT INTO and/or UPDATE FROM with some subqueries, but it will not be as convenient as it is in Oracle.
Do you want to do this in code or in a data warehouse? You could also encapsulate all the SQL in a stored procedure if you want to hide the complexity of the queries.

Table-Valued Parameter versus multiple Insert rows performance question for inserting into SQL Server 2008

If all I'm doing is inserting multiple rows of data into a single table in SQL Server 2008, which is faster?
A Table-Valued Parameter or a single insert statement with multiple values?
Where in this simple scenario would you prefer one over the other?
If I understand the question correctly, I'd go with the Table-Valued Parameter. Otherwise I'd think the list of parameters would quickly become unmanageable. You wouldn't want to end up with something like:
insert into YourTable
(col1, col2, ..., colN)
values
(@Row1Col1, @Row1Col2, ..., @Row1ColN),
(@Row2Col1, @Row2Col2, ..., @Row2ColN),
...
(@RowMCol1, @RowMCol2, ..., @RowMColN)
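For reference, a minimal sketch of the Table-Valued Parameter route (the type, procedure, and table names here are made up for illustration; SQL Server 2008 syntax):
CREATE TYPE dbo.YourTableRow AS TABLE
(
    col1 int,
    col2 nvarchar(50)
);
GO

CREATE PROCEDURE dbo.YourTable_InsertRows
    @rows dbo.YourTableRow READONLY
AS
BEGIN
    INSERT INTO dbo.YourTable (col1, col2)
    SELECT col1, col2
    FROM @rows;
END
GO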
Since both operations will insert data into a table, the question becomes: "Is the overhead of using a stored procedure to make table inserts too much for my system to handle?"
Does your system allow direct inserts into tables from the app now? If yes then just go with the direct insert.
I prefer to use stored procs, as they allow me to add auditing, error logic, etc., which just makes me feel better than dumping data directly into a table.
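As a sketch only, reusing the hypothetical table type from the sketch above and an invented dbo.AuditLog(EventTime, EventText) table, this is the kind of auditing/error logic a stored procedure makes easy to add:
CREATE PROCEDURE dbo.YourTable_InsertRowsAudited
    @rows dbo.YourTableRow READONLY
AS
BEGIN
    DECLARE @inserted int;

    BEGIN TRY
        INSERT INTO dbo.YourTable (col1, col2)
        SELECT col1, col2
        FROM @rows;

        SET @inserted = @@ROWCOUNT;

        INSERT INTO dbo.AuditLog (EventTime, EventText)
        VALUES (GETDATE(), 'Inserted ' + CAST(@inserted AS varchar(10)) + ' rows');
    END TRY
    BEGIN CATCH
        INSERT INTO dbo.AuditLog (EventTime, EventText)
        VALUES (GETDATE(), 'Insert failed: ' + ERROR_MESSAGE());
    END CATCH
END
GO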