In SQL Server I could copy SQL code out of an application, paste it into SSMS, declare and assign the variables that exist in the SQL, and run it. Yay, great debugging scenario.
E.g. (please note I am rusty and syntax may be incorrect):
declare @x as varchar(10)
set @x = 'abc'
select * from sometable where somefield = @x
I want to do something similar with Postgres in pgAdmin (or another Postgres tool, any recommendations?) where I can just drop my SQL (params and all) into something that will run against a Postgres DB.
I realise you can create pgScript, but it doesn't appear to be very good; for example, if I do the equivalent of the above, it doesn't put single quotes around the value in @x, nor does it let me work around that by doubling them up, and you don't get a table out afterwards, only text...
Currently I have a piece of SQL someone has written that has 3 unique variables in it which are used around 6 times each...
So the question is: how do other people debug SQL efficiently, preferably in a similar fashion to my SQL Server days?
You can achieve this using the PREPARE, EXECUTE and DEALLOCATE commands for handling prepared statements, which is really what we are talking about here.
For example:
PREPARE test AS SELECT * FROM users WHERE first_name = $1;
EXECUTE test ('paul');
DEALLOCATE test;
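Applied to the example from the question, it might look something like this (just a sketch; dbg is an arbitrary statement name, and sometable/somefield/'abc' come from the original post):
-- dbg is an arbitrary name; sometable, somefield and 'abc' are taken from the question
PREPARE dbg (varchar) AS SELECT * FROM sometable WHERE somefield = $1;
EXECUTE dbg('abc');
DEALLOCATE dbg;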
Perhaps not as graphical as some may like, but certainly workable.
I would give a shot at writing a SQL function that wraps your query. It can be something as simple as
CREATE OR REPLACE FUNCTION my_function(integer, integer)
RETURNS integer
AS
$$
SELECT $1 + $2;
$$
LANGUAGE SQL;
SELECT my_function(1, 2);
I would do this instead of a PREPARE since it will be simpler to update. Depending on how complex the function is, you might also want to look at some of the other PLs in Postgres.
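If the query you are debugging returns rows (as in the original example), a set-returning SQL function works the same way. A minimal sketch, assuming the sometable/somefield names from the question (debug_query is a made-up name):
-- debug_query is a hypothetical name; sometable/somefield come from the question
CREATE OR REPLACE FUNCTION debug_query(varchar)
RETURNS SETOF sometable
AS
$$
SELECT * FROM sometable WHERE somefield = $1;
$$
LANGUAGE SQL;

SELECT * FROM debug_query('abc');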
SQL procs are notoriously hard to debug. My lame but practical solution has been to write log messages to a log table, like this (please excuse syntax issues):
create table log_message (
log_timestamp timestamp not null default current_timestamp,
message varchar(1000)
);
then add lines to your stored proc like:
insert into log_message (message) values ('The value of x is ' || @x);
Then after a run:
select * from log_message order by 1;
It's not pretty, but works in every DB.
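In PL/pgSQL, for instance, that logging line would sit inside the function body like this (a sketch; debug_example is a made-up wrapper, and the variable is plain x, no prefix):
-- debug_example is a hypothetical wrapper around whatever is being debugged
CREATE OR REPLACE FUNCTION debug_example(x varchar)
RETURNS void AS
$$
BEGIN
    INSERT INTO log_message (message) VALUES ('The value of x is ' || x);
    -- ... the rest of the procedure's work goes here ...
END;
$$ LANGUAGE plpgsql;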
Summary
Is there an efficient way to run large numbers of dynamic SQL statements (on SQL Server 2005)?
Details
Our system allows users to create "email alert" subscriptions - where new matches on the system are emailed to them on a daily basis.
The subscription allows for multiple options, including the use of search keywords. A parser written by myself outputs the appropriate SQL code, taking into account "and", "or" and brackets (). The parser will not allow anything through that could be used for SQL injection.
For example, the keywords might be entered by the user as this (that or other), and the resultant query would end up roughly as...
SELECT *
FROM [VW_EMAIL_ALERT]
WHERE ([SEARCH] LIKE '%this%' AND ([SEARCH] LIKE '%that%' OR [SEARCH] LIKE '%other%'))
Each night, all those subscriptions are processed individually, because each one is potentially unique. The result is that the batch processing has to run a cursor over every subscription and run the SQL through sp_executesql.
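Roughly speaking, the nightly job ends up doing something like this (a simplified sketch only; the table and column names below are made up, not our real schema):
-- EmailAlertSubscription / SubscriptionSql are illustrative names only
DECLARE @sql nvarchar(max);
DECLARE sub_cursor CURSOR FAST_FORWARD FOR
    SELECT SubscriptionSql FROM dbo.EmailAlertSubscription;
OPEN sub_cursor;
FETCH NEXT FROM sub_cursor INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql;   -- one dynamic query per subscription
    FETCH NEXT FROM sub_cursor INTO @sql;
END
CLOSE sub_cursor;
DEALLOCATE sub_cursor;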
Obviously this is highly inefficient, and can cause serious overloading - leading in some cases to timeouts. The stored-procedure that runs this processing is coded to split the subscriptions into blocks, so they're not all being called at once.
Is there a better/more efficient way to do this?
Note: Unfortunately we are currently stuck supporting a minimum of SQL Server 2005, as some of our clients still use that technology
If you are looking for keywords, that is the least efficient way you could do it: a LIKE '%anything%' does not use an index.
Use a full-text search to index the words, or write your own parser to index the unique words. You would build up a keywords table and index the keyword column.
This is a very efficient query:
select id
from keywords
where keyword = 'this'
intersect
select id
from keywords
where keyword in ( 'that','other')
Even with wildcards in the keywords it is still much more efficient than searching the entire text
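A minimal sketch of the supporting table (names, types and sizes here are assumptions; id would be the key of the corresponding row in [VW_EMAIL_ALERT]):
-- sketch only: table and column names are assumptions
CREATE TABLE keywords (
    id      int          NOT NULL,   -- key of the source row in [VW_EMAIL_ALERT]
    keyword varchar(100) NOT NULL
);
CREATE INDEX IX_keywords_keyword ON keywords (keyword, id);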
I hope this will help. At my work, we replaced cursors with this kind of implementation.
DECLARE @strSQL NVARCHAR(MAX)
SET @strSQL = ''

CREATE TABLE #tmp
(
    Result_Query VARCHAR(MAX)
)

INSERT INTO #tmp
SELECT 'SELECT * FROM [VW_EMAIL_ALERT] WHERE ([SEARCH] LIKE ''%this%'' AND ([SEARCH] LIKE ''%that%'' OR [SEARCH] LIKE ''%other%''))'
UNION
SELECT 'SELECT * FROM [VW_EMAIL_ALERT] WHERE ([SEARCH] LIKE ''%this1%'' AND ([SEARCH] LIKE ''%that1%'' OR [SEARCH] LIKE ''%other1%''))'

SELECT @strSQL = @strSQL + Result_Query + ';'
FROM #tmp

SET @strSQL = LEFT(@strSQL, LEN(@strSQL) - 1)

PRINT @strSQL
EXEC(@strSQL)
The issue I'm facing is that I have a stored procedure (let's call it sp_one) which, during its run, calls another stored procedure (let's call it sp_two).
I'd only like the result set from sp_one to be returned at the end, and not the one from sp_two. I imagine there is a way to capture the results from sp_two that will prevent them from also being returned, but I haven't been able to figure out the syntax for this.
Any ideas?
Some pseudo code which captures the essence of what is going on (not my actual code):
CREATE PROCEDURE sp_two AS
BEGIN
update Users
set is_valid = 0
select * from Users
END
CREATE PROCEDURE sp_one
AS
BEGIN
exec sp_two
select * from Users
END
exec sp_one
The result of running exec sp_one is the result set from sp_two, then the results from sp_one (e.g. the Users table twice).
First of all, here is a similar question.
I don't recommend using this kind of solution, because it can easily become a bottleneck. I would say you should focus on making the data processing clearer (however, I understand that your question's example is just a theoretical one).
But if you really want to use something like this, I would first measure how big the returned result set is:
1: How many rows are returned?
2: How wide is the returned set?
And if you think "OK, it is not a big deal", then I would use a memory table (a table variable) instead of a temp table, so you do not make physical writes:
DECLARE @users TABLE (...fields here...)
INSERT INTO @users
EXEC sp_two
In sp_one, you can use
CREATE TABLE #temporaryusers (Usertable fields here)
INSERT INTO #temporaryusers
EXEC sp_two
DROP TABLE #temporaryusers
to swallow your results.
I'd like to use a stored procedure to define the IN clause of a select statement.
This is (a simplified version of) what I'm trying to do:
SELECT *
FROM myTable
WHERE columnName IN (CALL myStoredProc)
myStoredProc performs some complicated logic in the database and returns a list of possible matching values for columnName. The statement above obviously does not work. The SELECT statement may be performed in another stored procedure, if that makes a difference.
Is this at all possible in mySQL?
What return type does your current stored procedure have? You are speaking of "a list", so TEXT?
Maybe there's an easier way, but one thing you can do (inside another stored procedure) is to build another query.
To do that, we need to work around two limitations of MySQL: a) To execute dynamic SQL inside a stored procedure, it needs to be a prepared statement. b) Prepared statements can only be created out of user variables. So the complete SQL is:
SET @the_list = myStoredProc();
SET @the_query = CONCAT('SELECT * FROM myTable WHERE columnName IN (', @the_list, ')');
PREPARE the_statement FROM @the_query;
EXECUTE the_statement;
If you're talking about returning a result set from a stored routine and then using it as table, that is not possible. You need to make a temporary table to work around this limitation.
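A sketch of that temporary-table workaround (tmp_matches and fill_matches are assumed names; fill_matches would be a variant of your routine that INSERTs its results instead of SELECTing them):
-- tmp_matches and fill_matches are assumed names, not existing objects
CREATE TEMPORARY TABLE tmp_matches (columnName VARCHAR(100));
CALL fill_matches();  -- assumed to INSERT its matching values into tmp_matches
SELECT * FROM myTable WHERE columnName IN (SELECT columnName FROM tmp_matches);
DROP TEMPORARY TABLE tmp_matches;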
I need to make a SELECT with a call of a stored procedure in the WHERE clause.
It should be something like that....
SELECT distinct top 10 i.x, d.droit
FROM v_droit d, v_info i
WHERE d.nomdroit='yy'
AND i.id<>2
AND (select val from (exec up_droits(i.x, d.droit)) <>3
But it does not work...
Any idea?
Don't say to replace the stored procedure with a function, because it is not possible to use the existing code in a function. So a function is not a valid option; I really need to be able to use a stored procedure.
This is achieved by first executing the stored procedure, capturing the output into a #temp table or a @table variable, then running your query against that table. Something like this:
declare @droits_table table (val ...);
insert into @droits_table
exec up_droits param, param;
SELECT distinct top 10 i.x, d.droit FROM v_droit d, v_info i WHERE d.nomdroit='yy' AND i.id<>2 AND (select val from @droits_table) <> 3
Of course this will not work for you, because up_droits needs the i.x and d.droit parameters from the query. This indicates that your stored procedure should probably be a view or a table-valued function.
Sorry, but make it a table-valued function rather than a stored procedure.
Eg:
Scalar - SELECT id, name FROM test WHERE id < (SELECT dbo.mytestfunction())
Table - SELECT id, name FROM test WHERE id IN (SELECT col1 FROM dbo.mytestfunction())
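For the question's case, an inline table-valued function could look roughly like this (purely illustrative: fn_droits, the dbo.droits table and its columns are assumptions; the real logic from up_droits would go inside the RETURN):
-- fn_droits, dbo.droits and its columns are illustrative assumptions
CREATE FUNCTION dbo.fn_droits (@x int, @droit varchar(50))
RETURNS TABLE
AS
RETURN
(
    SELECT d.val
    FROM dbo.droits d
    WHERE d.x = @x AND d.droit = @droit
);
It could then be referenced directly in the WHERE clause (or via CROSS APPLY), as the query in the question attempts to do with the stored procedure.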
You can't. The content of the WHERE clause must be a search expression.
Is the reason that the code doesn't work as a function because it modifies some data? If so, then you're out of luck, functions used in where clauses must be immutable.
If the stored procedure doesn't modify any data, you may be able to wrap it inside of a function.
If you are on SQL Server I don't think you can do what you propose.
But one thing you can do is build dynamic queries, although be careful doing it because they open up many interesting problem areas.
The syntax is:
EXEC (@query)
But another thing you can do, which is probably much better for you, is to make the up_droits procedure deliver its results in a temp table; if you SELECT ... INTO a #table, it is temporary for the duration of your function/procedure scope:
alter procedure up_droits as
select val ... into #temp
So what you do is create a procedure:
create procedure Top10FromDroit
as
begin
exec up_droits
SELECT distinct top 10 i.x, d.droit FROM v_droit d, v_info i WHERE d.nomdroit='yy' AND i.id<>2 AND (select val from #temp) <> 3
end
Hopefully that will give you the results you want to achieve.
If at first you don't succeed, code around it^^
Could any of you explain the reasons for executing dynamic SQL inside a stored procedure? I know very few situations where you need it - really very few. 99.9% (or 999 out of 1000) of EXEC strings could be rewritten as normal SQL statements with parameters.
The very same goes for SELECTs that have functions inside the SELECT or WHERE clauses.
Try to think about your sets of data, not about procedural ways to solve the problem.
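As a trivial, hypothetical illustration of the "parameters instead of concatenation" point (the table and column names are made up):
-- instead of concatenating the value into a string and EXECing it:
--   SET @sql = 'SELECT * FROM Users WHERE first_name = ''' + @name + '''';
--   EXEC (@sql);
-- the same thing as an ordinary, parameterised statement inside the procedure:
SELECT * FROM Users WHERE first_name = @name;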
I am using Microsoft SQL Server 2005. I need to sync data between SQL Server and an Oracle DB. The first thing I need is to find out the count of data on the Oracle side with certain filters (here I use ID as a simple example):
SELECT COUNT(*) FROM oracleServer..owner.table1 WHERE id = @id;
The problem I have is that the table on the linked server (Oracle) is very big, with 4M rows of data. The above query took about 2 minutes to get data back. This code is just a simplified piece; my SP actually has some other queries to update and insert data from the linked server into my SQL Server. The SP took hours, even 10+ hours, to run against the large Oracle DB. Therefore plain T-SQL against the linked server is not good for me.
Recently I found OPENQUERY and EXEC (...) AT linkedServer. OPENQUERY() is very fast: it took almost no time to get the same result. However, it does not support variable queries or expressions; the query has to be a literal constant string.
EXEC() is a pass-through query to Oracle in the same way, and it is fast as well. For example:
EXEC ('SELECT COUNT(*) FROM owner.table1 WHERE id = ' + CAST(@id AS VARCHAR))
AT oracleServer
The problem I have is how to pass the result of COUNT(*) back. I tried to google examples on the web and MSDN. All I can find are SQL Server (or SQL Server Express) linked server examples like:
EXEC ('SELECT ? = COUNT(*) FROM ...', @myCount OUTPUT) AT expressSQL
This query does not work for Oracle. It seems that in Oracle you set a value as output this way:
SELECT COUNT(*) INTO myCount ...
I tried this:
EXEC ('SELECT COUNT(*) INTO ? FROM ...', @myCount OUTPUT) AT oracleServer
EXEC ('SELECT COUNT(*) INTO : FROM ...', @myCount OUTPUT) AT oracleServer
EXEC ('SELECT : = COUNT(*) FROM ...', @myCount OUTPUT) AT oracleServer
None of those worked; I got an error message saying the query is not executable on the Oracle server.
I could write a .NET SQL Server project to do the job. Before that, I just wonder if there is any way to pass the value out as an output parameter, so that I can keep the better-performing T-SQL code in my SP?
Just a quick update on this. I think I got the solution. I found it in a discussion on a similar issue at Dev NewsGroup. Based on the information, I tried this:
DECLARE @myCount int;
DECLARE @sql nvarchar(max);
set @sql =
N'BEGIN
select count(*) into :myCount from DATAPARC.CTC_MANUAL_DATA;
END;'
EXEC (@sql, @myCount OUTPUT) AT oracleServer;
PRINT @myCount; -- 3393065
Wow! I got the result back in 3 seconds, compared to the T-SQL query run directly against the Oracle DB (2+ minutes). The important thing is to wrap the query in "BEGIN" and "END;" as an anonymous block, and don't miss the ";" after END.
You need an anonymous block for output parameters. If you only have input parameters, or no parameters at all, you don't need the block and the query works fine.
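For example, with only an input parameter the pass-through can stay a plain parameterised query, no block required (a sketch reusing the owner.table1/id names from my earlier simplified example):
-- no anonymous block needed when there is no output parameter
EXEC ('SELECT COUNT(*) FROM owner.table1 WHERE id = ?', @id) AT oracleServer;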
Enjoy! By the way, this is a quick update; if you don't hear from me again, it means I haven't had any more trouble with this issue.
With linked servers the biggest issue is performance (IMHO).
[linkedserver]...[dbo.RemoteTable] vs OPENQUERY(linkedserver, 'Select * from dbo.RemoteTable'): always use the second one.
Now to answer the question: OPENQUERY and EXEC() AT are much quicker.
EXEC('Select * from dbo.RemoteTable') AT linkedserver will show the results, but there is no way to re-use them.
My simple solution:
SELECT * INTO LocalTable FROM OPENQUERY(linkedserver, 'Select * from dbo.RemoteTable')
OR
INSERT INTO LocalTable SELECT * FROM OPENQUERY(linkedserver, 'Select * from dbo.RemoteTable')
much^10 faster than
SELECT * INTO LocalTable FROM [linkedserver]...[dbo.RemoteTable]