Automatically placing results of a called procedure into a SELECT statement - SQL

I'm playing with some code from an article written by Peter Brawley found here on page 6 of the pdf. I'm trying to figure out how to automate it so that the result of the procedure is automatically placed in the select query. Right now what I am doing is calling the procedure, exporting the result into a text file, going to the text file manually (point click with mouse), copying the result and pasting it into a select statement. I haven't been able to figure out how to either insert the select statement into the procedure, or put the procedure into a table in my database or variable that I can call from the select statement. Any ideas?
Here is the sample code from Peter Brawley, that I've been trying to automate:
use database;
DROP PROCEDURE IF EXISTS writesumpivot;
DELIMITER |
CREATE PROCEDURE writesumpivot(
db CHAR(64), tbl CHAR(64), pivotcol CHAR(64), sumcol CHAR(64)
)
BEGIN
DECLARE datadelim CHAR(1) DEFAULT '"';
DECLARE comma CHAR(1) DEFAULT ',';
DECLARE singlequote CHAR(1) DEFAULT CHAR(39);
SET @sqlmode = (SELECT @@sql_mode);
SET @@sql_mode='';
SET @pivotstr = CONCAT( 'SELECT DISTINCT CONCAT(', singlequote,
',SUM(IF(', pivotcol, ' = ', datadelim, singlequote,
comma, pivotcol, comma, singlequote, datadelim,
comma, sumcol, ',0)) AS `',
singlequote, comma, pivotcol, comma, singlequote, '`',
singlequote, ') AS sumpivotarg FROM ', db, '.', tbl,
' WHERE ', pivotcol, ' IS NOT NULL' );
-- UNCOMMENT TO SEE THE MIDLEVEL SQL:
-- SELECT @pivotstr;
PREPARE stmt FROM @pivotstr;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
SET @@sql_mode=@sqlmode;
END
|
DELIMITER ;
call writesumpivot('database', 'table', 'pivotcol','sumcol');
Then the Select statement is as follows:
SELECT
infoField
[results of the call]
FROM
database.table
GROUP BY infoField;
Assuming I've run the call, exported the results, copied them, and pasted them into the select statement, my results of the call in the SELECT query would look something like this:
SELECT
infoField
,SUM(IF(pivotcol = "Yellow",sumcol,0)) AS `Yellow`
,SUM(IF(pivotcol = "Red",sumcol,0)) AS `Red`
,SUM(IF(pivotcol = "Purple",sumcol,0)) AS `Purple`
,SUM(IF(pivotcol = "Orange",sumcol,0)) AS `Orange`
,SUM(IF(pivotcol = "Green",sumcol,0)) AS `Green`
,SUM(IF(pivotcol = "Blue",sumcol,0)) AS `Blue`
,SUM(IF(pivotcol = "White",sumcol,0)) AS `White`
FROM database.table
GROUP BY infoField;
Running the above select statement gives me the pivot table that I need. I'm trying to figure out how to incorporate this into a website, which is why it needs to be automated.
I tried inserting a CREATE TABLE and then referencing the table, but didn't get the desired results.
Edited the last section of the PROCEDURE as follows:
-- SELECT @pivotstr;
DROP TABLE IF EXISTS temp2;
CREATE TABLE IF NOT EXISTS temp2(sumpivotarg varchar(8000));
PREPARE stmt FROM @pivotstr;
...
changed call and select as follows:
call writesumpivot('database','table','pivotcol','sumcol');
insert into temp2(sumpivotarg) values(@pivotstr);
SELECT
table.infoField, temp2.sumpivotarg
FROM table, temp2
GROUP BY infoField
Results from this were the generic code rather than the sums of the contents of the cells in the database. It looks something like this:
infoField | sumpivotarg <-- Col Headings
123 | SELECT DISTINCT CONCAT('Sum(if(pivotcol=",pivotcol",sumcol,0)) AS'pivotcol,'')..
124 | SELECT DISTINCT CONCAT('Sum(if(pivotcol=",pivotcol",sumcol,0)) AS'pivotcol,'')..
125 | select DISTINCT CONCAT('Sum(if(pivotcol=",pivotcol",sumcol,0)) AS'pivotcol,'')..

I do not mean any disrespect towards MySQL, but this whole writing-to-a-temp-table solution for passing tabular data between stored procedures is suboptimal and dangerous (in real-world transaction processing). I truly hope that the MySQL team will build in some enterprise-level stored procedure functionality. Also, MySQL functions not being able to return tables is a distinct disadvantage.
I have been slowly moving processes over to Linux and MySQL from MSSQL. The shortcomings of MySQL in the procedure and function department are forcing some major kludgey-type rewrites (a la temp tables and globals, etc.).
I have been writing SPs for about 20 years (Sybase before SQL Server) and feel strongly that using dynamic SQL does not take advantage of the server-side database. Many folks try to implement a data layer at the client level, but the server is better suited to this task. It is a natural division of functionality and data. Also, simultaneously running multiple precompiled calls at the server is quite a bit more optimal than repeated calls to the server for the same processes.
Come on mySQL team, I am keeping my fingers crossed....

You could create a temp table in your DB, use an SQL INSERT to put the result of the stored procedure execution into that temp table, and afterwards use the temp table inside your select statement.
Here's an answer that shows how to do that:
Use result set of mysql stored procedure in another stored procedure
Just to mention a similar question:
MySQL How to INSERT INTO temp table FROM Stored Procedure
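Another option, if you want to skip the copy/paste step entirely, is to have the procedure build and run the final pivot query itself. The sketch below is a rough adaptation of that idea, not Peter Brawley's original code; it assumes the grouping column (infoField in the example above) is passed in as an extra parameter, and it uses GROUP_CONCAT plus a second prepared statement to glue the generated SUM(IF(...)) expressions into the complete SELECT:
DROP PROCEDURE IF EXISTS runsumpivot;
DELIMITER |
CREATE PROCEDURE runsumpivot(
  db CHAR(64), tbl CHAR(64), pivotcol CHAR(64), sumcol CHAR(64), groupcol CHAR(64)
)
BEGIN
  -- allow long generated column lists
  SET SESSION group_concat_max_len = 100000;
  -- as in the original, you may need SET @@sql_mode='' if ANSI_QUOTES is enabled
  -- step 1: collect the SUM(IF(...)) expressions into @cols
  SET @sql = CONCAT(
    'SELECT GROUP_CONCAT(DISTINCT CONCAT(''SUM(IF(', pivotcol, ' = "'', ',
    pivotcol, ', ''",', sumcol, ',0)) AS `'', ', pivotcol,
    ', ''`'')) INTO @cols FROM ', db, '.', tbl,
    ' WHERE ', pivotcol, ' IS NOT NULL');
  PREPARE stmt FROM @sql;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
  -- step 2: wrap the expressions in the final grouped SELECT and run it
  SET @sql = CONCAT('SELECT ', groupcol, ', ', @cols,
                    ' FROM ', db, '.', tbl, ' GROUP BY ', groupcol);
  PREPARE stmt FROM @sql;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END
|
DELIMITER ;
CALL runsumpivot('database', 'table', 'pivotcol', 'sumcol', 'infoField');
Because the procedure returns the pivoted result set directly, a web application can simply CALL it and read the rows, with no intermediate text file or temp table.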


SQL Update Statement based on Procedure in SAP HANA

I'm creating an update statement that generates a SHA256 hash of table columns based on the table's name.
1st step: I created a procedure that gets the table's columns, concatenates them all into one column, then formats it into the desired format.
-- Procedure code : Extract table's columns list, concatenate it and format it
Create procedure SHA_PREP (in inp1 nvarchar(20))
as
begin
SELECT concat(concat('hash_sha256(',STRING_AGG(A, ', ')),')') AS Names
FROM (
SELECT concat('to_varbinary(IFNULL("',concat(COLUMN_NAME,'",''0''))')) as A
FROM SYS.TABLE_COLUMNS
WHERE SCHEMA_NAME = 'SCHEMA_NAME' AND TABLE_NAME = :inp1
AND COLUMN_NAME not in ('SHA')
ORDER BY POSITION
);
end;
/* Result of this procedures :
hash_sha256(
to_varbinary("ID"),to_varbinary(IFNULL("COL1",'0')),to_varbinary(IFNULL("COL2",'0')) )
*/
-- Update Statement needed
UPDATE "SCHEMA_NAME"."TABLE_NAME"
SET "SHA" = CALL "SCHEMA_NAME"."SHA_PREP"('SCHEMA_NAME')
WHERE "ID" = 99 -- a random filter
The solution by @SonOfHarpy technically works but has several issues, namely:
unnecessary use of temporary tables
overly complicated string assignment approach
use of fixed system table schema (SYS.TABLE_COLUMNS) instead of PUBLIC synonym
wrong data type and variable name for the input parameter
An improved version of the code looks like this:
create procedure SHA_PREP (in TABLE_NAME nvarchar(256))
as
begin
declare SQL_STR nvarchar(5000);
SELECT
'UPDATE "SCHEMA_NAME"."TABLE_NAME" SET "SHA"= hash_sha256(' || STRING_AGG(A, ', ') || ')'
into SQL_STR
FROM (
SELECT
'TO_VARBINARY(IFNULL("'|| "COLUMN_NAME" ||'",''0''))' as A
FROM TABLE_COLUMNS
WHERE
"SCHEMA_NAME" = 'SCHEMA_NAME'
AND "TABLE_NAME" = :TABLE_NAME
AND "COLUMN_NAME" != 'SHA'
ORDER BY POSITION
);
-- select :sql_str from dummy; -- this is for debugging output only
EXECUTE IMMEDIATE (:SQL_STR);
end;
By changing the CONCAT functions to the shorter || (double-pipe) operator, the code becomes a lot easier to read as the formerly nested function calls are now simple chained concatenations.
By using SELECT ... INTO variable the whole nonsense with the temporary table can be avoided, again, making the code easier to understand and less prone to problems.
The input parameter name now correctly reflects its meaning and mirrors the HANA dictionary data type for TABLE_NAME (NVARCHAR(256)).
The procedure now consists of two commands (SELECT and EXECUTE IMMEDIATE) that each performs an essential task of the procedure:
Building a valid SQL update command string.
Executing the SQL command.
I removed the useless line-comments but left a debugging statement as a comment in the code, so that the SQL string can be reviewed without having to execute the command.
For that to work, obviously, the EXECUTE... line needs to be commented out and the debugging line has to be uncommented.
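With that in place, running the fingerprint update for a table is a single call (the table name here is just a placeholder):
-- run the update for one table; 'MY_TABLE' is an illustrative name
CALL SHA_PREP('MY_TABLE');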
What's more worrying than the construction of the solution is its purpose.
It looks as if the SHA column should be used as a kind of shorthand row-data fingerprint. The UPDATE approach certainly handles this as an after-thought activity but leaves the "finger-printing" for the time when the update gets executed.
Also, it takes an essential part of the table design (that the SHA column should contain the fingerprint) away from the table definition.
An alternative to this could be a GENERATED COLUMN:
create table test (aaa int, bbb int);
alter table test add (sha varbinary (256) generated always as
hash_sha256(to_varbinary(IFNULL("AAA",'0'))
, to_varbinary(IFNULL("BBB",'0'))
)
);
insert into test (aaa, bbb) values (12, 32);
select * from test;
/*
AAA BBB SHA
12 32 B6602F58690CA41488E97CD28153671356747C951C55541B6C8D8B8493EB7143
*/
With this, the "generator" approach could be used for table definition/modification time, but all the actual data handling would be automatically done by HANA, whenever values get changed in the table.
Also, no separate calls to the procedure will ever be necessary as the fingerprints will always be current.
I found a solution that suits my needs, but maybe there are other easier or more suitable approaches:
I added the update statement to my procedure, inserted the whole generated query into a temporary table column, then executed it using EXECUTE IMMEDIATE:
Create procedure SHA_PREP (in inp1 nvarchar(20))
as
begin
/* ********************************************************** */
DECLARE SQL_STR VARCHAR(5000);
-- Create a temporary table to store a query in
create local temporary table #temp1 (QUERY varchar(5000));
-- Insert the desirable query into the QUERY column (Temp Table)
insert into #temp1(QUERY)
SELECT concat('UPDATE "SCHEMA_NAME"."TABLE_NAME" SET "SHA" =' ,concat(concat('hash_sha256(',STRING_AGG(A, ', ')),')'))
FROM (
SELECT concat('to_varbinary(IFNULL("',concat(COLUMN_NAME,'",''0''))')) as A
FROM SYS.TABLE_COLUMNS
WHERE SCHEMA_NAME = 'SCHEMA_NAME' AND TABLE_NAME = :inp1
AND COLUMN_NAME not in ('SHA')
ORDER BY POSITION
);
/* QUERY : UPDATE "SCHEMA_NAME"."TABLE_NAME" SET "SHA" =
hash_sha256(to_varbinary("ID"),to_varbinary(IFNULL("COL1",'0')),to_varbinary(IFNULL("COL2",'0'))) */
SELECT QUERY into SQL_STR FROM "SCHEMA_NAME".#temp1;
-- Executing the query
EXECUTE IMMEDIATE (:SQL_STR);
-- Dropping the temporary table
DROP TABLE "SCHEMA_NAME".#temp1;
/* ********************************************************** */
end;
Any other solutions or improvements are welcome.
Thank you

Store a database name in a variable & then use it dynamically

I have a table in my database which has the names of all the databases on my server.
The table looks like:
create Table #db_name_list(Did INT IDENTITY(1,1), DNAME NVARCHAR(100))
INSERT INTO #db_name_list
SELECT 'db_One ' UNION ALL
SELECT 'db_Two' UNION ALL
SELECT 'db_Three' UNION ALL
SELECT 'db_four' UNION ALL
SELECT 'db_five'
select * from #db_name_list
I have many SPs in my database which use multiple tables and join them.
At present I am using SQL code like:
Select Column from db_One..Table1
Left outer join db_two..Table2
on ....some Condition ....
REQUIREMENT
But I do not want to HARDCODE the database name.
I want to store the database name in a variable and use that.
Reason: I want to restore the same database with a different name and run those SPs. At present we can't do that, because I have used db_One..Table1
or db_two..Table2.
I want something like...
/* SAMPLE SP */
CREATE PROCEDURE LOAD_DATA
AS
BEGIN
DECLARE @dbname nvarchar(500), @dbname2 nvarchar(500)
SET @dbname = ( SELECT DNAME FROM #db_name_list WHERE Did=1)
SET @dbname2 = ( SELECT DNAME FROM #db_name_list WHERE Did=2)
PRINT @dbname
SELECT * FROM @dbname..table1
/* or */
SELECT * FROM @dbname2.dbo.table1
END
i.e. using a variable instead of the database name.
But it throws the error:
"Incorrect syntax near '.'."
P.S. This was posted by someone else on MSDN but the answer there was not clear, and I had the same kind of doubt. So please help.
You can't use a variable like this in a static SQL query. You have to use the variable in dynamic SQL instead, in order to build the query you want to execute, like:
DECLARE @sql nvarchar(500) = 'SELECT * FROM ' + @dbname + '.dbo.mytable'
EXEC(@sql);
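Applied to the LOAD_DATA example from the question, that approach might look like the sketch below. It is only an outline: the table, column and join names are placeholders, it assumes #db_name_list is still available in the session that calls the procedure, and QUOTENAME is used so an unusual database name cannot break the generated statement.
CREATE PROCEDURE LOAD_DATA
AS
BEGIN
    DECLARE @dbname  nvarchar(500),
            @dbname2 nvarchar(500),
            @sql     nvarchar(max)

    SELECT @dbname  = DNAME FROM #db_name_list WHERE Did = 1
    SELECT @dbname2 = DNAME FROM #db_name_list WHERE Did = 2

    -- build the query as a string, then execute it
    SET @sql = N'SELECT t1.SomeColumn
                 FROM ' + QUOTENAME(@dbname) + N'.dbo.Table1 AS t1
                 LEFT OUTER JOIN ' + QUOTENAME(@dbname2) + N'.dbo.Table2 AS t2
                     ON t1.Id = t2.Id'

    EXEC sp_executesql @sql
END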
There seem to be a couple of options for you depending on your circumstances.
1. Simple - Generalise your procedures
Simply take out the database references in your stored procedure, as there is no need to have an explicit reference to the database if it is running against the database it is stored in. Your select queries will look like:
SELECT * from schema.table WHERE x = y
Rather than
SELECT * from database.schema.table WHERE x = y
Then just create the stored procedure in the new database and away you go. Simply connect to the new database and run the SP. This method would also allow you to promote the procedure to being a system stored procedure, which would mean it was automatically available in every database without having to run CREATE beforehand. For more details, see this article.
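As a sketch of what option 1 looks like in practice (the table and column names are illustrative):
CREATE PROCEDURE dbo.LOAD_DATA
AS
BEGIN
    -- no database prefix: the procedure runs against whichever database it lives in
    SELECT t1.SomeColumn
    FROM dbo.Table1 AS t1
    LEFT OUTER JOIN dbo.Table2 AS t2
        ON t1.Id = t2.Id
END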
2. Moderate - Dynamic SQL
Change your stored procedure to take a database name as a parameter, such as this example:
CREATE PROCEDURE example (@DatabaseName VARCHAR(200))
AS
BEGIN
DECLARE @SQL VARCHAR(MAX) = 'SELECT * FROM ['+@DatabaseName+'].schema.table WHERE x = y'
EXEC (@SQL)
END
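Calling it then just means passing the database name, e.g.:
EXEC example @DatabaseName = 'db_Two'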

SQL Server - find SPs which don't drop temp tables

(1) Is there a good/reliable way to query the system catalogue in order
to find all stored procedures which create some temporary tables in their
source code bodies but which don't drop them at the end of their bodies?
(2) In general, can creating temp tables in a SP and not dropping
them in the same SP cause some problems and if so, what problems?
I am asking this question in the context of
SQL Server 2008 R2 and SQL Server 2012, mostly.
Many thanks in advance.
Not 100% sure if this is accurate as I don't have a good set of test data to work with. First you need a function to count occurrences of a string (shamelessly stolen from here):
CREATE FUNCTION dbo.CountOccurancesOfString
(
@searchString nvarchar(max),
@searchTerm nvarchar(max)
)
RETURNS INT
AS
BEGIN
return (LEN(@searchString)-LEN(REPLACE(@searchString,@searchTerm,'')))/LEN(@searchTerm)
END
Next make use of the function like this. It searches the procedure text for the strings and reports when the number of creates doesn't match the number of drops:
WITH CreatesAndDrops AS (
SELECT procedures.name,
dbo.CountOccurancesOfString(UPPER(syscomments.text), 'CREATE TABLE #') AS Creates,
dbo.CountOccurancesOfString(UPPER(syscomments.text), 'DROP TABLE #') AS Drops
FROM sys.procedures
JOIN sys.syscomments
ON procedures.object_id = syscomments.id
)
SELECT * FROM CreatesAndDrops
WHERE Creates <> Drops
1) probably no good / reliable way -- though you can extract the text of sp's using some arcane ways that you can find in other places.
2) In general - no this causes no problems -- temp tables (#tables) are scope limited and will be flagged for removal when their scope disappears.
and table variables likewise
an exception is for global temp tables (##tables) which are cleaned up when no scope holds a reference to them. Avoid those guys -- there are usually (read almost always) better ways to do something than with a global temp table.
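If you want to see the scoping described in (2) for yourself, a quick demonstration (object names here are made up) looks like this:
CREATE PROCEDURE dbo.ScopeDemo
AS
BEGIN
    CREATE TABLE #work (id int)    -- local temp table, scoped to this procedure
    INSERT INTO #work VALUES (1)
    SELECT * FROM #work
END
GO

EXEC dbo.ScopeDemo             -- returns the single row
SELECT * FROM #work            -- fails: Invalid object name '#work' -- it was cleaned up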
Sigh -- if you want to go down the (1) path then be aware that there are lots of pitfalls in looking at code inside sql server -- many of the helper functions and information tables will truncate the actual code down to a NVARCHAR(4000)
If you look at the code of sp_helptext you'll see a really horrible cursor that pulls the actual text..
I wrote this a long time ago to look for strings in code - you could run it on your database -- look for 'CREATE TABLE #' and 'DROP TABLE #' and compare the outputs....
DECLARE @SearchString VARCHAR(255) = 'DELETE FROM'
SELECT
[ObjectName]
, [ObjectText]
FROM
(
SELECT
so.[name] AS [ObjectName]
, REPLACE(comments.[c], '&#x0D;', '') AS [ObjectText]
FROM
sys.objects AS so
CROSS APPLY (
SELECT CAST([text] AS NVARCHAR(MAX))
FROM syscomments AS sc
WHERE sc.[id] = so.[object_id]
FOR XML PATH('')
)
AS comments ([c])
WHERE
so.[is_ms_shipped] = 0
AND so.[type] = 'P'
)
AS spText
WHERE
spText.[ObjectText] LIKE '%' + @SearchString + '%'
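On SQL Server 2008 R2 / 2012 you can also sidestep the truncation problem entirely, because sys.sql_modules stores the full object definition as nvarchar(max). A simpler variant of the same search (the same caveats about naive string matching apply) would be:
DECLARE @SearchString nvarchar(255) = N'CREATE TABLE #'

SELECT o.[name]       AS ObjectName,
       m.[definition] AS ObjectText
FROM sys.sql_modules AS m
JOIN sys.objects     AS o ON o.[object_id] = m.[object_id]
WHERE o.[type] = 'P'
  AND o.[is_ms_shipped] = 0
  AND m.[definition] LIKE '%' + @SearchString + '%'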
Or much better - use whatever tool of choice you like on your codebase - you've got all your sp's etc scripted out into source control somewhere, right.....?
I think the SQL Search tool from Red Gate would come in handy in this case. You can download it from here. This tool will find SQL text within stored procedures, functions, views etc...
Just install this plugin and you can find sql text easily from SSMS.

Select all records from all the tables, every derived table must have its own alias

I'm working on an e-learning project in which there is a table named chapter, which has a column named question_table; this holds the name of the table in which that specific chapter's questions are added.
Now the problem is that I want to display all the questions from all the chapters. For this I used the following SQL query:
SELECT * FROM (SELECT `question_table` FROM `chapter`)
but it doesn't work and gives the error:
"Every derived table must have its own alias".
Note: I want to do it using SQL not PHP.
Firstly, I think you would be better off redesigning your database. Multiple tables of the same structure holding the same data are generally not a good idea.
However what you require is possible using a MySQL procedure to build up some dynamic SQL and then execute it, returning the resulting data.
A procedure as follows could be used to do this:-
DROP PROCEDURE IF EXISTS dynamic;
delimiter //
CREATE PROCEDURE dynamic()
BEGIN
DECLARE question_table_value VARCHAR(25);
DECLARE b INT DEFAULT 0;
DECLARE c TEXT DEFAULT '';
DECLARE cur1 CURSOR FOR SELECT `question_table` FROM `chapter`;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET b = 1;
OPEN cur1;
SET b = 0;
WHILE b = 0 DO
FETCH cur1 INTO question_table_value;
IF b = 0 THEN
IF c = '' THEN
SET c = CONCAT('SELECT * FROM `',question_table_value, '`');
ELSE
SET c = CONCAT(c, ' UNION SELECT * FROM `',question_table_value, '`');
END IF;
END IF;
END WHILE;
CLOSE cur1;
SET @stmt1 := c;
PREPARE stmt FROM @stmt1;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
END//
DELIMITER ;
This is creating a procedure called dynamic. This takes no parameters. It sets up a cursor to read the question_table column values from the chapter table. It loops over the results from that, building up a string which contains the SQL, which is a SELECT from each table with the results UNIONed together. This is then PREPAREd and executed. The procedure will return the result set from the SQL executed by default.
You can call this to return the results using:-
CALL dynamic()
The downside is that this isn't going to give nice results if there are no rows to return, and procedures like this are not that easy to maintain or debug with the normal tools developers have. Added to which, very few people have any real stored procedure skills to maintain it in the future.
In MySQL you must give every subquery ("derived table") an alias:
SELECT * FROM (SELECT question_table FROM chapter) t --notice the alias "t"
The derived table here is the result of the (SELECT ...). You need to give it an alias, like so:
SELECT * FROM (SELECT question_table FROM chapter) X;
Edit, re dynamic tables
If you know all the tables in advance, you can union them, i.e.:
SELECT * FROM
(
SELECT Col1, Col2, ...
FROM Chapter1
UNION
SELECT Col1, Col2, ...
FROM Chapter2
UNION
...
) X;
SqlFiddle here
To do this solution generically, you'll need to use dynamic SQL to achieve your goal.
In general however, this is indicative of a smell in your table design - your chapter data should really be in one table, and e.g. classified by the chapter id.
If you do need to shard data for scale or performance reasons, the typical mechanism for doing this is to span multiple databases, not tables in the same database. MySQL can handle large numbers of rows per table, and performance won't be an issue if the table is indexed appropriately.

Is my stored procedure executing out of order?

Brief history:
I'm writing a stored procedure to support a legacy reporting system (using SQL Server Reporting Services 2000) on a legacy web application.
In keeping with the original implementation style, each report has a dedicated stored procedure in the database that performs all the querying necessary to return a "final" dataset that can be rendered simply by the report server.
Due to the business requirements of this report, the returned dataset has an unknown number of columns (it depends on the user who executes the report, but may have 4-30 columns).
Throughout the stored procedure, I keep a column UserID to track the user's ID to perform additional querying. At the end, however, I do something like this:
UPDATE #result
SET Name = ppl.LastName + ', ' + ppl.FirstName
FROM #result r
LEFT JOIN Users u ON u.id = r.userID
LEFT JOIN People ppl ON ppl.id = u.PersonID
ALTER TABLE #result
DROP COLUMN [UserID]
SELECT * FROM #result r ORDER BY Name
Effectively I set the Name varchar column (that was previously left NULL while I was performing some pivot logic) to the desired name format in plain text.
When finished, I want to drop the UserID column as the report user shouldn't see this.
Finally, the data set returned has one column for the username, and an arbitrary number of INT columns with performance totals. For this reason, I can't simply exclude the UserID column since SQL doesn't support "SELECT * EXCEPT [UserID]" or the like.
With this known (any style pointers are appreciated but not central to this problem), here's the problem:
When I execute this stored procedure, I get an execution error:
Invalid column name 'userID'.
However, if I comment out my DROP COLUMN statement and retain the UserID, the stored procedure performs correctly.
What's going on? It certainly looks like the statements are executing out of order and it's dropping the column before I can use it to set the name strings!
[Edit 1]
I defined UserID previously (the whole stored procedure is about 200 lines of mostly irrelevant logic, so I'll paste snippets):
CREATE TABLE #result ([Name] NVARCHAR(256), [UserID] INT);
Case sensitivity isn't the problem but did point me to the right line - there was one place in which I had userID instead of UserID. Now that I fixed the case, the error message complains about UserID.
My "broken" stored procedure also works properly in SQL Server 2008 - this is either a 2000 bug or I'm severely misunderstanding how SQL Server used to work.
Thanks everyone for chiming in!
For anyone searching this in the future, I've added an extremely crude workaround to be 2000-compatible until we update our production version:
DECLARE @workaroundTableName NVARCHAR(256), @workaroundQuery NVARCHAR(2000)
SET @workaroundQuery = 'SELECT [Name]';
DECLARE cur_workaround CURSOR FOR
SELECT COLUMN_NAME FROM [tempdb].INFORMATION_SCHEMA.Columns WHERE TABLE_NAME LIKE '#result%' AND COLUMN_NAME <> 'UserID'
OPEN cur_workaround;
FETCH NEXT FROM cur_workaround INTO @workaroundTableName
WHILE @@FETCH_STATUS = 0
BEGIN
SET @workaroundQuery = @workaroundQuery + ',[' + @workaroundTableName + ']'
FETCH NEXT FROM cur_workaround INTO @workaroundTableName
END
CLOSE cur_workaround;
DEALLOCATE cur_workaround;
SET @workaroundQuery = @workaroundQuery + ' FROM #result ORDER BY Name ASC'
EXEC(@workaroundQuery);
Thanks everyone!
A much easier solution would be to not drop the column, but simply not return it in the final select.
There are all sorts of reasons why you shouldn't be returning select * from your procedure anyway.
EDIT: I see now that you have to do it this way because of an unknown number of columns.
Based on the error message, is the database case sensitive, and so there's a difference between userID and UserID?
This works for me:
CREATE TABLE #temp_t
(
myInt int,
myUser varchar(100)
)
INSERT INTO #temp_t(myInt, myUser) VALUES(1, 'Jon1')
INSERT INTO #temp_t(myInt, myUser) VALUES(2, 'Jon2')
INSERT INTO #temp_t(myInt, myUser) VALUES(3, 'Jon3')
INSERT INTO #temp_t(myInt, myUser) VALUES(4, 'Jon4')
ALTER TABLE #temp_t
DROP Column myUser
SELECT * FROM #temp_t
DROP TABLE #temp_t
It says invalid column for you. Did you check the spelling and ensure that the column even exists in your temp table?
You might try wrapping everything preceding the DROP COLUMN in a BEGIN...COMMIT transaction.
At compile time, SQL Server is probably expanding the * into the full list of columns. Thus, at run time, SQL Server executes "SELECT UserID, Name, LastName, FirstName, ..." instead of "SELECT *". Dynamically assembling the final SELECT into a string and then EXECing it at the end of the stored procedure may be the way to go.
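As a sketch of that idea which also works on SQL Server 2000, the column list can be built with the variable-concatenation trick instead of a cursor (assuming the same #result table as above; the concatenation order in this pattern is not formally guaranteed, so treat it as a convenience rather than a contract):
DECLARE @cols nvarchar(2000), @sql nvarchar(2000)

-- collect every column except UserID into a bracketed, comma-separated list
SELECT @cols = COALESCE(@cols + ', ', '') + '[' + COLUMN_NAME + ']'
FROM tempdb.INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME LIKE '#result%'
  AND COLUMN_NAME <> 'UserID'
ORDER BY ORDINAL_POSITION

-- assemble and run the final SELECT
SET @sql = 'SELECT ' + @cols + ' FROM #result ORDER BY Name ASC'
EXEC(@sql)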