Why do I get these different results from two SQL queries? - sql

This has been killing me all day =D. Please help!
Scenario 1: Two DB's on the same server (A, B) and A has three tables. Query is executed from B.
Scenario 2: Two DB's one server is linked to the other. (B has link A) and A has three tables. Query is executed from B.
In scenario 1 (non linked server):
DECLARE @val nvarchar(max)
SET @val = ''
SELECT @val = @val + 'Hello, my name is ' + [name] + '!' + CHAR(10) + CHAR(13)
FROM A.sys.tables
Returns:
Hello, my name is Table1!
Hello, my name is Table2!
Hello, my name is Table3!
In scenario 2 (linked server):
DECLARE @val nvarchar(max)
SET @val = ''
SELECT @val = @val + 'Hello, my name is ' + [name] + '!' + CHAR(10) + CHAR(13)
FROM LINKED.A.sys.tables
Returns:
Hello, my name is Table3!
Why are these different? If I use openquery() on the linked server the results are the same as scenario 1. I'm trying to avoid using openquery() if possible. Thanks!

Unfortunately, this is an unreliable method of string concatenation in SQL Server. I would avoid it in all but the most trivial of cases. There is some more information in this KB: Execution Plan and Results of Aggregate Concatenation Queries Depend Upon Expression Location.
That said, I was able to both duplicate your problem and provide a workaround in my environment:
DECLARE @val nvarchar(max)
SET @val = ''
SELECT @val = @val + 'Hello, my name is ' + REPLACE([name], '', '') + '!' + CHAR(10) + CHAR(13)
FROM LINKED.A.sys.tables
Notice that I've added an empty replace function to the expression. Though it should do nothing to the output, it does add a local "compute scalar" step to the query plan. This seems to pull back all of the data from the name column to then be processed locally rather than just letting the remote query return what it thinks is needed.
I'm not sure if there's a better function to use other than a replace with empty arguments. Perhaps a double reverse or something. Just be sure to cast to a max datatype if necessary as the documentation states.
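For instance, the double-reverse variant mentioned above might look like the sketch below. I haven't verified that it behaves identically across providers; like the empty REPLACE, the idea is only that it leaves the text unchanged while forcing a local compute scalar step, with the CAST keeping the intermediate value as a max type:

```sql
-- Hypothetical variant: REVERSE twice instead of an empty REPLACE.
DECLARE @val nvarchar(max)
SET @val = ''
SELECT @val = @val + 'Hello, my name is '
            + REVERSE(REVERSE(CAST([name] AS nvarchar(max)))) + '!'
            + CHAR(10) + CHAR(13)
FROM LINKED.A.sys.tables
```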
UPDATE
Simply declaring @val as varchar(max) rather than nvarchar(max) clears up the problem, as the implicit conversion then brings back the entire name column (type sysname, i.e. nvarchar(128), I believe) for local processing, just like the replace function did. I cannot pretend to know which combination of linked server settings and implicit casting causes this to come up. Hopefully someone with more knowledge in this area can chime in!
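To make that concrete, the only change is the declaration; the concatenation query itself stays exactly as above:

```sql
-- Exhibits the problem over the linked server (only the last row survives):
DECLARE @val nvarchar(max)

-- Works: the implicit conversion from sysname (nvarchar(128)) to varchar(max)
-- pulls the whole name column back for local processing:
DECLARE @val varchar(max)
```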

Related

Find rows containing delimited words within nvarchar parameter

I have a procedure that selects an offset of rows from a table:
SELECT * --table contains ID and Name columns
FROM Names
ORDER BY ID
OFFSET @Start ROWS
FETCH NEXT @Length ROWS ONLY
In addition to the @Start and @Length parameters, the procedure also receives a @SearchValue NVARCHAR(255) parameter. @SearchValue contains a string of values delimited by a space, for example '1 ik mi' or 'Li 3'.
What I need is to query every record containing all of those values. So, if @SearchValue is '1 ik mi', it should return any records that contain all three values: '1', 'ik', and 'mi'. Another way to understand this is by going here, searching the table (try searching 00 eer 7), and observing the filtered results.
I have the freedom to change the delimiter or run some function (in C#, in my case) that could format an array of those words.
Below are our FAILED attempts (we didn't try implementing it with OFFSET yet):
SELECT ID, Name
FROM Names
WHERE CAST(ID AS nvarchar(255)) IN (SELECT value FROM STRING_SPLIT(@SearchValue, ' ')) AND
      Name IN (SELECT value FROM STRING_SPLIT(@SearchValue, ' '))
SELECT ID, Name
FROM Names
WHERE @SearchValue LIKE '% ' + CAST(ID AS nvarchar(20)) + ' %' AND
      @SearchValue LIKE '% ' + Name + ' %';
We used Microsoft docs on string_split for the ideas above.
Tomorrow, I will try to implement this solution, but I'm wondering if there's another way to do this in case that one doesn't work. Thank you!
Your best bet will be to use a FULL TEXT index. This is what they're built for.
Having said that, you can work around it... BUT! You're going to be building a query to do it. You can either build the query in C# and fire it at the database, or build it in the database. However, you're never going to be able to optimise the query very well, because users being users could fire all sorts of garbage into your search that you'll need to watch out for, which is obviously a topic for another discussion.
The solution below makes use of sp_executesql, so you're going to have to watch out for SQL injection (before someone else picks apart this whole answer just to point out SQL injection):
DROP TABLE IF EXISTS #Cities;
CREATE TABLE #Cities(id INTEGER IDENTITY PRIMARY KEY, [Name] VARCHAR(100));
INSERT INTO #Cities ([Name]) VALUES
('Cooktown'),
('South Suzanne'),
('Newcastle'),
('Leeds'),
('Podunk'),
('Udaipur'),
('Delhi'),
('Murmansk');
DECLARE @SearchValue VARCHAR(20) = 'ur an rm';
DECLARE @query NVARCHAR(1000);
SELECT @query = COALESCE(@query + '%'' AND [Name] LIKE ''%', '') + value
FROM (SELECT value FROM STRING_SPLIT(@SearchValue, ' ')) a;
SELECT @query = 'SELECT * FROM #Cities WHERE [Name] LIKE ''%' + @query + '%''';
EXEC sp_executesql @query;
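For completeness, a non-dynamic alternative is possible. This is an untested sketch, assuming the search tokens are distinct: join each row to the tokens it matches and require that the match count equal the token count.

```sql
-- Sketch: keep rows whose Name matches every token in @SearchValue.
-- Assumes SQL Server 2016+ for STRING_SPLIT and distinct tokens.
DECLARE @SearchValue VARCHAR(20) = 'ur an rm';

SELECT c.id, c.[Name]
FROM #Cities c
JOIN STRING_SPLIT(@SearchValue, ' ') s
  ON c.[Name] LIKE '%' + s.value + '%'
GROUP BY c.id, c.[Name]
HAVING COUNT(*) = (SELECT COUNT(*) FROM STRING_SPLIT(@SearchValue, ' '));
```

This avoids sp_executesql (and the injection concern) entirely, at the cost of a non-sargable LIKE per token.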

Display results of dynamic SQL stored in a table SQL Server 2012

I have a table that's being used by an application to set some variables there.
One of the things the application does is create files, and the file names and directories are saved in the table.
But I want the file name to be an expression like CONVERT(VARCHAR(10), GETDATE(), 20) + '.txt', so I'd like to save this in the table as a SQL expression and then evaluate it to get the value 2012-03-27.txt out of it.
Does anyone have any idea how to do it?
A dynamic SQL SP might work, but it'll be way complicated. Is there any other way?
Addition
A computed column won't work because I want a different expression on each row.
If there were such a thing as a computed column where I could enter a different expression on each line, that would be awesome!
Just put it in as a computed column in the table definition with the expression for the column having exactly the code you supplied, and you should be all set!
Well, not likely to be incredibly efficient, but here's one way:
DECLARE @x TABLE(sql NVARCHAR(2000));
INSERT @x(sql) VALUES
('CONVERT(VARCHAR(10),GETDATE(),20) + ''.txt'''),
('CONVERT(CHAR(8), GETDATE(), 112) + ''.sql''');
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql += ' UNION ALL SELECT ' + sql FROM @x;
SET @sql = STUFF(@sql, 1, 11, '');
EXEC sp_executesql @sql;
Results:
2013-02-27.txt
20130227.sql
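To see why the STUFF offset is 11, it helps to look at the concatenated string before cleanup; the leading separator ' UNION ALL ' is exactly 11 characters, which STUFF strips off to leave a valid statement:

```sql
-- The built-up string before STUFF(..., 1, 11, ''):
--  UNION ALL SELECT CONVERT(VARCHAR(10),GETDATE(),20) + '.txt' UNION ALL SELECT CONVERT(CHAR(8), GETDATE(), 112) + '.sql'
-- And after stripping the first 11 characters:
-- SELECT CONVERT(VARCHAR(10),GETDATE(),20) + '.txt' UNION ALL SELECT CONVERT(CHAR(8), GETDATE(), 112) + '.sql'
```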
Solved!
In my particular case the "Application" is an SSIS package, so what I did is very simple:
I grab the value from the table into a variable User::srcFile.
I add an Execute SQL Task, setting the SQL statement expression to "SELECT " + @[User::srcFile] + " AS FileName".
In the result set section I assign FileName back to User::srcFile.
Thanks everyone for your help!

T-SQL Newline String Concatenation For Exec()

So, I've come across a pretty annoying problem with T-SQL... Essentially, a table contains several T-SQL statements. I want a stored procedure to efficiently grab these rows and concatenate them into a variable, then execute the concatenated script with EXEC(@TSQL).
The problem is, the newlines seem to be stripped from the concatenated string when calling EXEC...
For example something like:
declare @sql nvarchar(max) = ''
select @sql += char(13) + char(10) + [sql] from [SqlTable]
exec(@sql) -- Won't always do the right thing
print @sql -- Looks good
I don't want to butcher the code with a cursor; is there any way around this? Thanks!
Edit:
Okay, so it looks like the issue is in fact only with the GO statement, for example:
declare @test nvarchar(max) = 'create table #test4(i int) ' + char(10) + char(13) + 'GO' + char(10) + char(13) + 'create table #test5(i int)'
exec(@test)
I guess this go will have to go (no pun intended) I just really didn't want to have to try and parse it in fear of special cases blowing up the whole thing.
A select statement without order by is free to return results in any order.
You'd have to specify the order in which your SQL snippets make sense:
select @sql += char(13) + char(10) + [sql]
from [SqlTable]
order by SnippetSequenceNr
As @Bort suggested in the comments, if the snippets are stand-alone SQL, you can separate them with a semicolon. Carriage returns, newlines, tabs and spaces are all the same in T-SQL: they're whitespace.
select @sql += [sql] + ';'
from [SqlTable]
order by SnippetSequenceNr
Just get rid of the GO "statements". As noted by others, you also might need to ensure the string is constructed in the correct statement sequence. Using += is probably not the best idea, though I'm not sure about the dynamic SQL idea in the first place. It might actually be more appropriate to use a cursor here.
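A cursor version, for what it's worth, might look something like the sketch below (table and column names follow the earlier answer). Executing each snippet separately sidesteps both the GO problem and the undefined += ordering:

```sql
-- Sketch: execute each stored snippet on its own, in sequence.
DECLARE @stmt nvarchar(max);
DECLARE snippet_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT [sql] FROM [SqlTable] ORDER BY SnippetSequenceNr;
OPEN snippet_cursor;
FETCH NEXT FROM snippet_cursor INTO @stmt;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC (@stmt);
    FETCH NEXT FROM snippet_cursor INTO @stmt;
END
CLOSE snippet_cursor;
DEALLOCATE snippet_cursor;
```

Note that each EXEC runs in its own batch, so temp tables created in one snippet won't be visible to the next unless they're global.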
Sam F,
Your method didn't work with the FOR XML method of string concatenation (if you want to create a "line"-delimited list based on values found in different rows of a table). However, if you replace the char(13) with SPACE(13), it works great.
SELECT PackageNote = SUBSTRING((SELECT (SPACE(13) + char(10) + PackageDescription)
FROM POPackageNote PN2
WHERE PN1.PurchaseOrderNumber = PN2.PurchaseOrderNumber
ORDER BY POPackageNoteID, PackageDescription
FOR XML PATH( '' )
), 3, 1000 )
FROM POPackageNote PN1
WHERE (PurchaseOrderNumber = @PurchaseOrderNumber)
GROUP BY PurchaseOrderNumber

MSSQL: given a table's object_id, determine whether it is empty

For a bit of database-sanity checking code, I'd like to determine whether a particular object_id corresponds to an empty table.
Is there some way to (for instance) select count(*) from magic_operator(my_object_id) or similar?
I'd strongly prefer a pure-SQL solution that can run on MS SQL Server 2008 R2.
You can get a rough idea from
SELECT SUM(rows)
FROM sys.partitions p
WHERE index_id < 2 AND p.object_id = @my_object_id
If you want guaranteed accuracy, you would need to construct and execute a dynamic SQL string containing the two-part object name. An example is below, though depending on how you are using this you may prefer to use sp_executesql and return the result as an output parameter instead.
DECLARE @DynSQL nvarchar(max) =
N'SELECT CASE WHEN EXISTS(SELECT * FROM ' +
QUOTENAME(OBJECT_SCHEMA_NAME(@my_object_id)) + '.' +
QUOTENAME(OBJECT_NAME(@my_object_id)) +
') THEN 0 ELSE 1 END AS IsEmpty'
EXECUTE (@DynSQL)
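The sp_executesql variant mentioned above could look like this sketch, with the same EXISTS check but the result returned in an output parameter instead of a result set:

```sql
-- Sketch: capture the emptiness flag in a variable via an OUTPUT parameter.
DECLARE @IsEmpty bit;
DECLARE @DynSQL nvarchar(max) =
    N'SELECT @IsEmpty = CASE WHEN EXISTS(SELECT * FROM ' +
    QUOTENAME(OBJECT_SCHEMA_NAME(@my_object_id)) + '.' +
    QUOTENAME(OBJECT_NAME(@my_object_id)) +
    N') THEN 0 ELSE 1 END';
EXEC sp_executesql @DynSQL, N'@IsEmpty bit OUTPUT', @IsEmpty = @IsEmpty OUTPUT;
```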
Well, it depends on what you consider pure SQL.
I've come up with the following solution. It is written purely in T-SQL but uses a dynamically built query.
-- Using variables just for better readability.
DECLARE @Name NVARCHAR(4000)
DECLARE @Schema NVARCHAR(4000)
DECLARE @Query NVARCHAR(4000)
-- Get the relevant data
SET @Schema = QUOTENAME(OBJECT_SCHEMA_NAME(613577224))
SET @Name = QUOTENAME(OBJECT_NAME(613577224))
-- Build the query, taking into consideration the schema and possible poor object naming
SET @Query = 'SELECT COUNT(*) FROM ' + @Schema + '.' + @Name
-- Execute it.
EXEC(@Query)
EDIT
The changes take into account the possible faulty cases described in the comments.
I've pulled the values out into variables because it's a convenient approach for me. Cheers.

SQL 2005 - Linked Server to Oracle Queries Extremely Slow

On my SQL 2005 server, I have a linked server connecting to Oracle via the OraOLEDB.Oracle provider.
If I run a query through the 4 part identifier like so:
SELECT * FROM [SERVER]...[TABLE] WHERE COLUMN = 12345
It takes over a minute to complete. If I run the same query like so:
SELECT * FROM OPENQUERY(SERVER, 'SELECT * FROM TABLE WHERE COLUMN = 12345')
It completes instantly. Is there a setting I'm missing somewhere to get the first query to run in a decent period of time? Or am I stuck using openquery?
In your first example using "dot" notation, the client cursor engine is used and most things are evaluated locally. If you're selecting from a large table and using a WHERE clause, the records will be pulled down locally from the remote db, and only then is the WHERE clause applied. This sequence is often a performance hit, and indexes on the remote db are basically rendered useless.
Alternatively, when you use OPENQUERY, SQL Server sends the SQL statement to the target database for processing. During processing, any indexes on the tables are leveraged, and the WHERE clause is applied on the Oracle side before the result set is sent back to SQL Server.
In my experience, except for the simplest of queries, OPENQUERY is going to give you better performance.
I would recommend using OpenQuery for everything for the above reasons.
One of the pain points you may have already encountered when using OPENQUERY is single quotes. If the SQL string being sent to the remote db requires single quotes around a string or a date, they need to be escaped; otherwise they inadvertently terminate the SQL string.
Here is a template that I use whenever I'm dealing with variables in an openquery statement to a linked server to take care of the single quote problem:
DECLARE @UniqueId int
      , @sql varchar(500)
      , @linkedserver varchar(30)
      , @statement varchar(600)
SET @UniqueId = 2
SET @linkedserver = 'LINKSERV'
SET @sql = 'SELECT DummyFunction(''''' + CAST(@UniqueId AS VARCHAR(10)) + ''''') FROM DUAL'
SET @statement = 'SELECT * FROM OPENQUERY(' + @linkedserver + ', '
SET @statement = @statement + '''' + @sql + ''')'
EXEC(@statement)