Pivoting Date in SQL Server

I have the following code in SQL Server that is generating a count of users by Code and by Date:
SELECT
code, CONVERT(DATE, created_at) Date, COUNT(account_id) UserCount
FROM
Code_Table
GROUP BY
code, CONVERT(DATE, created_at)
This just generates a count of users by each unique code and date combination. I'm attempting to use the Pivot function to get:
the list of unique codes in the first column
a column for each unique date, starting in the second column
a count of each user associated with the date-code combinations populating the table.
One issue I'm running into: I would like this query to update automatically each day, adding an additional column for the most recent date. The only PIVOT examples I've found require an explicit declaration of the columns being created from rows.
I understand this would be much more easily done in Excel w/ a Pivot, but I don't currently have that option.
Any advice would be much appreciated!

You need a dynamic pivot to accomplish this. You're correct that you need an explicit column list -- so you'll query your table and actually generate SQL syntax as the result, and then execute that syntax with sp_executesql.
You can find that on Stack Overflow:
SQL Server dynamic PIVOT query?
A word of warning: This is usually not best practice. You won't be able to do any filtering or any date-related logic on this result set. Whatever front-end reporting software you are using is probably where you want to implement the matrix/crosstab-like behavior that you're getting from pivoting.
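For reference, here is a minimal sketch of that pattern, assuming the Code_Table / code / account_id / created_at names from the question; the column list is built from the distinct dates and then spliced into the PIVOT before calling sp_executesql:
DECLARE @cols nvarchar(MAX), @sql nvarchar(MAX);

-- Build one [yyyy-mm-dd] column per distinct date in the table
SELECT @cols = STUFF((
    SELECT DISTINCT ',' + QUOTENAME(CONVERT(varchar(10), CONVERT(date, created_at), 120))
    FROM Code_Table
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(MAX)'), 1, 1, '');

SET @sql = N'
SELECT code, ' + @cols + N'
FROM (SELECT code, account_id, CONVERT(date, created_at) AS created_at FROM Code_Table) src
PIVOT (COUNT(account_id) FOR created_at IN (' + @cols + N')) p;';

EXEC sp_executesql @sql;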

Try using this stored procedure:
CREATE PROCEDURE GetData
-- Add the parameters for the stored procedure here
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    DECLARE @columns varchar(MAX) = '';

    -- Build one [yyyy-mm-dd] column header per distinct date in the table
    SELECT @columns = @columns + '[' + CONVERT(VARCHAR(10), CONVERT(DATE, ct.created_at), 120) + '],'
    FROM Code_Table ct
    GROUP BY CONVERT(DATE, ct.created_at)
    ORDER BY CONVERT(DATE, ct.created_at) ASC;

    -- Drop the trailing comma
    SET @columns = SUBSTRING(@columns, 1, LEN(@columns) - 1);

    DECLARE @query varchar(MAX) = 'SELECT
        *
    FROM
        (SELECT code, account_id, CONVERT(DATE, created_at) AS created_at FROM Code_Table) AS result
    PIVOT
    (
        COUNT(account_id) FOR created_at IN (' + @columns + ')
    ) p';

    EXEC (@query)
END
I build the column headers dynamically based on the date values that exist in the table.
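Since the column list is rebuilt every time the procedure runs, simply re-running it each day (for example from a scheduled job) picks up the newest date as an extra column:
EXEC GetData;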


Create function/procedure to return table to get the last rows

I have several tables, each with a different key.
For example: the key for the Customer could be 2 or more columns.
Input - dbo.customer:
Customer   e_Date       Value
-----------------------------
1000       2019-05-26   200
1000       2019-05-25   100
2000       2019-04-23   50
2000       2019-04-21   20
Output:
Customer   e_Date       Value
-----------------------------
1000       2019-05-26   200
2000       2019-04-23   50
The max date and its value are returned for each customer (key).
I want to build a function or procedure in SQL Server where I pass in the name of the table and its key, and it returns the output above: a table-returning function.
exec get_Last_Row_By_Key @Table_Name, @Key
and it will show me the output.
In this example:
exec get_Last_Row_By_Key 'dbo.customer', 'Customer'
I guess that when @Key covers multiple columns, I can concatenate the other columns to make them a single-column key.
Put the query below in your function, using the row_number() window function:
select customer, e_date, value
from (
    select *, row_number() over (partition by customer order by e_date desc) rn
    from dbo.customer
) a
where a.rn = 1
The function call would need to be something like this:
exec get_Last_Row_By_Key 'dbo.customer', 'Customer', 'e_date'
To support multiple keys, I would use row_number(), even though that adds an extra column to the result set. So the dynamic SQL would look like:
declare @sql nvarchar(max);

set @sql = '
select t.*
from (select t.*,
             -- desc so the most recent row per key gets seqnum = 1
             row_number() over (partition by [key] order by [datecol] desc) as seqnum
      from [table] t
     ) t
where seqnum = 1
';

set @sql = replace(@sql, '[table]', @table);
set @sql = replace(@sql, '[key]', @key);
set @sql = replace(@sql, '[datecol]', @datecol);

exec sp_executesql @sql;
Note: I am explicitly not using quotename() here, so the code allows multiple columns for either the key or the datecol.
Also, as an exercise, this might be useful for learning about dynamic SQL and stored procedures. In general though, such attempts at "generic" processing are not as useful as they seem. People who know SQL know how to write the query to do what they want on a table; they do not know custom stored procedures that do the same thing.
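As an illustration of that exercise, a rough sketch of wrapping the dynamic SQL above in the procedure signature the question asked for (the procedure and parameter names are just illustrative):
CREATE PROCEDURE get_Last_Row_By_Key
    @table   nvarchar(256),
    @key     nvarchar(256),    -- may be a comma-separated list of key columns
    @datecol nvarchar(128)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @sql nvarchar(max) = '
select t.*
from (select t.*,
             row_number() over (partition by [key] order by [datecol] desc) as seqnum
      from [table] t
     ) t
where seqnum = 1';

    SET @sql = replace(@sql, '[table]', @table);
    SET @sql = replace(@sql, '[key]', @key);
    SET @sql = replace(@sql, '[datecol]', @datecol);

    EXEC sp_executesql @sql;
END
GO

-- Example call, matching the sample data above:
EXEC get_Last_Row_By_Key 'dbo.customer', 'Customer', 'e_date';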

Dynamic SQL not working. Regular SQL working [duplicate]

It looks like #temptables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQLs in the same stored procedure.
However, I can reference a temp table created by a dynamic SQL statement in a subsequent dynamic SQL, but it seems that a stored procedure does not return a query result to a calling client unless the SQL is fixed.
A simple 2 table scenario:
I have 2 tables. Let's call them Orders and Items. Order has a Primary key of OrderId and Items has a Primary Key of ItemId. Items.OrderId is the foreign key to identify the parent Order. An Order can have 1 to n Items.
I want to be able to provide a very flexible "query builder" type interface to the user to allow the user to select what Items he wants to see. The filter criteria can be based on fields from the Items table and/or from the parent Order table. If an Item meets the filter conditions, including any condition on the parent Order if one exists, the Item should be returned in the query, as well as its parent Order.
Usually, I suppose, most people would construct a join between the Items table and the parent Orders table. I would like to perform 2 separate queries instead: one to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is twofold, and you may or may not agree.
The first reason is that I need to query all of the columns in the parent Order table and if I did a single query to join the Orders table to the Items table, I would be repeating the Order information multiple times. Since there are typically a large number of items per Order, I'd like to avoid this because it would result in much more data being transferred to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use the two tables within to populate a custom Order and child Items client objects. (I don't know enough about LINQ or Entity Framework yet. I build my objects by hand). The second reason I would like to return two tables instead of one is because I already have another procedure that returns all of the Items for a given OrderId along with the parent Order and I would like to use the same 2-table approach so that I could reuse the client code to populate my custom Order and Client objects from the 2 datatables returned.
What I was hoping to do was this:
Construct a dynamic SQL string on the client which joins the Orders table to the Items table and filters appropriately on each table, as specified by the custom filter created on the Winform fat-client app. The SQL built on the client would have looked something like this:
TempSQL = "
INSERT INTO #ItemsToQuery (OrderId, ItemId)
SELECT
    Orders.OrderId, Items.ItemId
FROM
    Orders, Items
WHERE
    Orders.OrderId = Items.OrderId AND
    /* Some unpredictable Order filters go here */
    AND
    /* Some unpredictable Items filters go here */
"
Then, I would call a stored procedure,
CREATE PROCEDURE GetItemsAndOrders(@tempSql AS text)
AS
Execute (@tempSQL) --to create the #ItemsToQuery table
SELECT * FROM Items WHERE Items.ItemId IN (SELECT ItemId FROM #ItemsToQuery)
SELECT * FROM Orders WHERE Orders.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery)
The problem with this approach is that #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the following 2 static SQLs and if I change the static SQLs to dynamic, no results are passed back to the fat client.
Three workarounds come to mind, but I'm looking for a better one:
1) The first SQL could be performed by executing the dynamically constructed SQL from the client. The results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored proc could then insert the data into a temporary table using static SQL; because the table was not created by dynamic SQL, it could then be queried without issue. (I could also investigate passing the new table type param instead of XML.) However, I would like to avoid passing up potentially large lists to a stored procedure.
2) I could perform all the queries from the client.
The first would be something like this:
SELECT Items.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
SELECT Orders.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
This still provides me with the ability to reuse my client-side object-population code, because the Orders and Items continue to be returned in two different tables.
I have a feeling, too, that I might have some options using a table data type within my stored proc, but that is also new to me and I would appreciate a little bit of spoon-feeding on that one.
If you even scanned this far in what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how to accomplish this best.
You need to create your table first; then it will be available in the dynamic SQL.
This works:
CREATE TABLE #temp3 (id INT)
EXEC ('insert #temp3 values(1)')
SELECT *
FROM #temp3
This will not work:
EXEC (
'create table #temp2 (id int)
insert #temp2 values(1)'
)
SELECT *
FROM #temp2
In other words:
Create temp table
Execute proc
Select from temp table
Here is a complete example:
CREATE PROC prTest2 @var VARCHAR(100)
AS
EXEC (@var)
GO
CREATE TABLE #temp (id INT)
EXEC prTest2 'insert #temp values(1)'
SELECT *
FROM #temp
1st Method - Enclose multiple statements in the same Dynamic SQL Call:
DECLARE @DynamicQuery NVARCHAR(MAX)
SET @DynamicQuery = 'Select * into #temp from (select * from tablename) alias
select * from #temp
drop table #temp'
EXEC sp_executesql @DynamicQuery
2nd Method - Use Global Temp Table:
(Careful: you need to take extra care with a global temp table.)
IF OBJECT_ID('tempdb..##temp2') IS NULL
BEGIN
EXEC (
'create table ##temp2 (id int)
insert ##temp2 values(1)'
)
SELECT *
FROM ##temp2
END
Don't forget to delete the ##temp2 object manually once you're done with it:
IF (OBJECT_ID('tempdb..##temp2') IS NOT NULL)
BEGIN
DROP Table ##temp2
END
Note: Don't use this second method if you don't know the full structure of the database.
I had the same issue that @Muflix mentioned. When you don't know the columns being returned, or they are being generated dynamically, what I've done is create a global table with a unique id and then delete it when I'm done with it. That looks something like what's shown below:
DECLARE @DynamicSQL NVARCHAR(MAX)
DECLARE @DynamicTable VARCHAR(255) = 'DynamicTempTable_' + CONVERT(VARCHAR(36), NEWID())
DECLARE @DynamicColumns NVARCHAR(MAX)
--Get "@DynamicColumns", example: SET @DynamicColumns = '[Column1], [Column2]'
SET @DynamicSQL = 'SELECT ' + @DynamicColumns + ' INTO [##' + @DynamicTable + ']' +
    ' FROM [dbo].[TableXYZ]'
EXEC sp_executesql @DynamicSQL
SET @DynamicSQL = 'IF OBJECT_ID(''tempdb..##' + @DynamicTable + ''' , ''U'') IS NOT NULL ' +
    ' BEGIN DROP TABLE [##' + @DynamicTable + '] END'
EXEC sp_executesql @DynamicSQL
Certainly not the best solution, but this seems to work for me.
I would strongly suggest you have a read through http://www.sommarskog.se/arrays-in-sql-2005.html
Personally I like the approach of passing a comma-delimited text list, then parsing it with a text-to-table function and joining to it. The temp table approach can work if you create it first in the connection, but it feels a bit messier.
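For illustration, a minimal sketch of that list-parsing approach, assuming SQL Server 2016+ for the built-in STRING_SPLIT function (on older versions you would use a home-grown split function instead); the sample IDs are made up and the Items / ItemId names are borrowed from the question:
DECLARE @idList varchar(8000) = '101,205,317';   -- comma-delimited list passed in from the client

SELECT i.*
FROM Items AS i
JOIN STRING_SPLIT(@idList, ',') AS s
    ON i.ItemId = CAST(s.value AS int);          -- join to the parsed values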
Result sets from dynamic SQL are returned to the client. I have done this quite a lot.
You're right about issues with sharing data through temp tables and variables and things like that between the SQL and the dynamic SQL it generates.
I think in trying to get your temp table working, you have probably got some things confused, because you can definitely get data from a SP which executes dynamic SQL:
USE SandBox
GO
CREATE PROCEDURE usp_DynTest(@table_type AS VARCHAR(255))
AS
BEGIN
DECLARE @sql AS VARCHAR(MAX) = 'SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''' + @table_type + ''''
EXEC (@sql)
END
GO
EXEC usp_DynTest 'BASE TABLE'
GO
EXEC usp_DynTest 'VIEW'
GO
DROP PROCEDURE usp_DynTest
GO
Also:
USE SandBox
GO
CREATE PROCEDURE usp_DynTest(@table_type AS VARCHAR(255))
AS
BEGIN
DECLARE @sql AS VARCHAR(MAX) = 'SELECT * INTO #temp FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''' + @table_type + '''; SELECT * FROM #temp;'
EXEC (@sql)
END
GO
EXEC usp_DynTest 'BASE TABLE'
GO
EXEC usp_DynTest 'VIEW'
GO
DROP PROCEDURE usp_DynTest
GO

How to check if a value exists in any of the columns in a table in sql

Say I have 100 columns in a table. I do not know in which columns a particular value could exist. So I would like to check across all columns: if the value exists in any of the 100 columns, I would like to select that row.
I searched around a bit, and in most places the solution seems to be something like the following
select *
from tablename
where col1='myval'
or col2='myval'
or col3='myval'
or .. or col100='myval'
I also read a few forums where having to do this is said to be a bad case of database design. I agree, but I'm working on an already-existing table in a database.
Is there a more intelligent way to do this?
One way is by reversing the IN operator:
select *
from yourtable
where 'Myval' in (col1,col2,col3,...)
If you don't want to manually type the columns, use dynamic SQL to generate the query:
declare @sql nvarchar(max)='select *
from yourtable
where ''Myval'' in ('
select @sql+=quotename(column_name)+',' from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME='yourtable'
select @sql =left(@sql,len(@sql)-1)+')'
--print @sql
exec sp_executesql @sql

How do I insert the results from "EXEC()" in a temp table

I need some help with a problem. Our company has a vendor that delivers a database to us; inside that database, the vendor has a table with a lot of T-SQL scripts. What I want to do is the following: I want to make a select to find the script, then execute the script and store the result in a variable or temp table. I cannot alter the vendor's script, so I need the result in something I can manipulate. Another problem is that I don't know how many columns the result will have, so it has to be flexible. One script may have 5 columns and the next script 8, and so on.
Example:
DECLARE @SQL nvarchar(MAX) = ( Select distinct script_details
from scripttable where .......)
This will give me the script I want to use, then I use
EXEC(@SQL)
to execute the script.
Then my problem is that I want to get the result from this into a variable or a table.
I have tried to make a temp table like this:
create table #TmpTblSP (col1 varchar(MAX),col2 varchar(MAX),col3 varchar(MAX),col4 varchar(MAX),col5 varchar(MAX),col6 varchar(MAX),col7 varchar(MAX),col8 varchar(MAX),col9 varchar(MAX),col10 varchar(MAX),col11 varchar(MAX),col12 varchar(MAX))
then
insert into #TmpTblSP
EXEC(#SQL)
This gives me the following error:
Msg 213, Level 16, State 7, Line 1
Column name or number of supplied values does not match table definition.
But if I know how many columns there are and specify them in the insert, it works:
insert into #TmpTblSP(Col1,Col2,Col3)
EXEC(#SQL)
But here you see my problem: I don't know how many columns there are in each script. I could make one insert for every script the vendor has, but that would be a lot; there are about 3,000 scripts in that table and they change them often.
You could try something like:
DECLARE @SQL nvarchar(MAX) = (
Select distinct script_details
into #temptbl
from scripttable where .......
);
EXEC(@SQL);
If you don't know how many columns your @sql gives, then the only solution is to use SELECT INTO. I use it this way:
DECLARE @QRY nvarchar(MAX) = ( Select distinct script_details
from scripttable where .......)
SET @sql = 'SELECT * into ' + @temptablename + ' FROM (' + @qry + ') A '
It gives some flexibility
Remember that it is easy to check the structure of a table created this way in the sys catalog views, so you can build another @SQL from this info if needed.
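For example, a quick sketch of that catalog lookup, assuming the dynamic SQL created a global temp table named ##Results (the name is just an illustration):
SELECT c.name AS column_name, t.name AS type_name, c.max_length
FROM tempdb.sys.columns AS c
JOIN tempdb.sys.types AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('tempdb..##Results');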
It is also recommended to split the "SELECT INTO" into 2 parts:
One is
SELECT INTO ......... WHERE 1=2
Second
INSERT INTO SELECT ......
Creating the table takes locks in the database, so it is good to create it as quickly as possible and then insert into it separately.
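A minimal sketch of that two-step pattern (the target and source table names are illustrative):
-- Step 1: create the empty target table; WHERE 1=2 copies only the structure.
SELECT *
INTO ##Results
FROM dbo.SomeSourceTable
WHERE 1 = 2;

-- Step 2: load the data in a separate statement.
INSERT INTO ##Results
SELECT *
FROM dbo.SomeSourceTable;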

Handling the data in an IN clause, with SQL parameters?

We all know that prepared statements are one of the best ways of fending off SQL injection attacks. What is the best way of creating a prepared statement with an "IN" clause? Is there an easy way to do this with an unspecified number of values? Take the following query for example:
SELECT ID,Column1,Column2 FROM MyTable WHERE ID IN (1,2,3)
Currently I'm using a loop over my possible values to build up a string such as:
SELECT ID,Column1,Column2 FROM MyTable WHERE ID IN (@IDVAL_1,@IDVAL_2,@IDVAL_3)
Is it possible to just pass an array as the value of the query parameter and use a query as follows?
SELECT ID,Column1,Column2 FROM MyTable WHERE ID IN (@IDArray)
In case it's important I'm working with SQL Server 2000, in VB.Net
Here you go - first create the following function...
Create Function [dbo].[SeparateValues]
(
@data VARCHAR(MAX),
@delimiter VARCHAR(10)
)
RETURNS @tbldata TABLE(col VARCHAR(10))
As
Begin
DECLARE @pos INT
DECLARE @prevpos INT
SET @pos = 1
SET @prevpos = 0
WHILE @pos > 0
BEGIN
SET @pos = CHARINDEX(@delimiter, @data, @prevpos+1)
if @pos > 0
INSERT INTO @tbldata(col) VALUES(LTRIM(RTRIM(SUBSTRING(@data, @prevpos+1, @pos-@prevpos-1))))
else
INSERT INTO @tbldata(col) VALUES(LTRIM(RTRIM(SUBSTRING(@data, @prevpos+1, len(@data)-@prevpos))))
SET @prevpos = @pos
End
RETURN
END
then use the following...
Declare @CommaSeparated varchar(50)
Set @CommaSeparated = '112,112,122'
SELECT ID,Column1,Column2 FROM MyTable WHERE ID IN (select col FROM [dbo].[SeparateValues](@CommaSeparated, ','))
I think SQL Server 2008 will allow table-valued parameters.
UPDATE
You'll squeeze some extra performance using the following syntax...
SELECT ID,Column1,Column2 FROM MyTable
Cross Apply [SeparateValues](#CommaSeparated, ',') s
Where MyTable.id = s.col
Because the previous syntax causes SQL Server to run an extra "Sort" operation when using the "IN" clause. Plus, in my opinion, it looks nicer :D!
If you would like to pass an array, you will need a function in SQL that can turn that array into a sub-select.
These functions are very common, and most home-grown systems take advantage of them.
Most commercial, or rather professional, ORMs do INs by binding a bunch of variables, so if you have that working, I think that is the standard method.
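For illustration, a rough sketch of that numbered-parameter pattern expressed directly in T-SQL with sp_executesql (MyTable, the column names, and the @IDVAL_n parameters come from the question; the ID values are made up, and from VB.Net you would add the parameters to the command object instead):
-- Each ID becomes its own parameter, so values are never concatenated into the SQL text.
EXEC sp_executesql
    N'SELECT ID, Column1, Column2 FROM MyTable WHERE ID IN (@IDVAL_1, @IDVAL_2, @IDVAL_3)',
    N'@IDVAL_1 int, @IDVAL_2 int, @IDVAL_3 int',
    @IDVAL_1 = 1, @IDVAL_2 = 2, @IDVAL_3 = 3;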
You could create a temporary table TempTable with a single column VALUE and insert all IDs. Then you could do it with a subselect:
SELECT ID,Column1,Column2 FROM MyTable WHERE ID IN (SELECT VALUE FROM TempTable)
Go with the solution posted by digiguru. It's a great reusable solution and we use the same technique as well. New team members love it, as it saves time and keeps our stored procedures consistent. The solution also works well with SQL Reports, as the parameters passed to stored procedures to create the recordsets pass in varchar(8000). You just hook it up and go.
In SQL Server 2008, they finally got around to addressing this classic problem by adding a new "table" datatype. Apparently, that lets you pass in an array of values, which can be used in a sub-select to accomplish the same as an IN statement.
If you're using SQL Server 2008, then you might look into that.
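A brief sketch of that table-valued parameter approach on SQL Server 2008 and later (the type and procedure names are illustrative, not a fixed API):
-- One-time setup: a table type to carry the list of IDs.
CREATE TYPE dbo.IdList AS TABLE (ID int PRIMARY KEY);
GO

CREATE PROCEDURE dbo.GetMyTableByIds
    @Ids dbo.IdList READONLY   -- table-valued parameters must be passed READONLY
AS
    SELECT m.ID, m.Column1, m.Column2
    FROM MyTable AS m
    WHERE m.ID IN (SELECT ID FROM @Ids);
GO

-- Example call from T-SQL; from .NET you would pass a DataTable as the parameter.
DECLARE @list dbo.IdList;
INSERT INTO @list (ID) VALUES (1), (2), (3);
EXEC dbo.GetMyTableByIds @Ids = @list;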
Here's one technique I use
ALTER Procedure GetProductsBySearchString
@SearchString varchar(1000)
as
set nocount on
declare @sqlstring varchar(6000)
select @sqlstring = 'set nocount on
select a.productid, count(a.productid) as SumOf, sum(a.relevence) as CountOf
from productkeywords a
where rtrim(ltrim(a.term)) in (''' + Replace(@SearchString,' ', ''',''') + ''')
group by a.productid order by SumOf desc, CountOf desc'
exec(@sqlstring)