I have 50 stored procedures that I need to add a new location to. Is there an alternative to writing my stored procedures in the following way (where I copy the same select statement for each location)?
IF @LOCATION = 'Canada'
BEGIN
SELECT location_id, location_description
INTO #tempAssetHistoryCANADA
FROM [SERVER20].[Shop_Canada].[dbo].[report_asset_history]
END
IF @LOCATION = 'USA'
BEGIN
SELECT location_id, location_description
INTO #tempAssetHistoryUSA
FROM [SERVER20].[Shop_USA].[dbo].[report_asset_history]
END
I have a select statement that fires if @parameter = 'x', and then the exact same select statement, but against a different data source with the same structure, if @parameter = 'y'.
I'm wondering if there is a better way to write these stored procedures, because in the future when I need to add a new location I will have to update all 50 stored procedures, copying each statement and slightly altering it for the new location's data. I've researched around and haven't found anything helpful.
Thanks!
One possible way, instead of using a dynamic query, is to create a view:
CREATE VIEW dbo.Locations
AS
SELECT location_id, location_description, 'Canada' AS location
FROM [SERVER20].[Shop_Canada].[dbo].[report_asset_history]
UNION ALL
SELECT location_id, location_description, 'USA' AS location
FROM [SERVER20].[Shop_USA].[dbo].[report_asset_history]
And then using it:
SELECT location_id, location_description
INTO #tempAssetHistory
FROM [dbo].Locations
WHERE location = @LOCATION
If you have new tables [SERVER20].[Shop_XXX].[dbo].[report_asset_history] you will have to add them to your view.
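For example, if a hypothetical Shop_Mexico database were added, the view would just gain one more branch (Shop_Mexico is an assumed name used for illustration):
ALTER VIEW dbo.Locations
AS
SELECT location_id, location_description, 'Canada' AS location
FROM [SERVER20].[Shop_Canada].[dbo].[report_asset_history]
UNION ALL
SELECT location_id, location_description, 'USA' AS location
FROM [SERVER20].[Shop_USA].[dbo].[report_asset_history]
UNION ALL
-- hypothetical new location
SELECT location_id, location_description, 'Mexico' AS location
FROM [SERVER20].[Shop_Mexico].[dbo].[report_asset_history]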
Put the code that loads the temp table into a table-valued function. Then call this function from all your other SPs that need the data:
SELECT * INTO #TempAssetHistory FROM dbo.LoadTempAssetHistory(@Location)
:
: Use the data
:
The LoadTempAssetHistory function would look something like this (code not tested):
CREATE FUNCTION LoadTempAssetHistory
(
@LOCATION Varchar(50)
)
RETURNS TABLE
AS
RETURN
(
SELECT location_id, location_description
FROM [SERVER20].[Shop_Canada].[dbo].[report_asset_history]
WHERE @LOCATION='CANADA'
UNION
SELECT location_id, location_description
FROM [SERVER20].[Shop_USA].[dbo].[report_asset_history]
WHERE @LOCATION = 'USA'
)
Now you can amend the function when you have a new location (or decide to reorganise all your data) without needing to change 50 SPs.
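For instance, adding a hypothetical Mexico shop would only mean one more branch inside the function (Shop_Mexico is an assumed name):
ALTER FUNCTION LoadTempAssetHistory
(
@LOCATION Varchar(50)
)
RETURNS TABLE
AS
RETURN
(
SELECT location_id, location_description
FROM [SERVER20].[Shop_Canada].[dbo].[report_asset_history]
WHERE @LOCATION = 'CANADA'
UNION
SELECT location_id, location_description
FROM [SERVER20].[Shop_USA].[dbo].[report_asset_history]
WHERE @LOCATION = 'USA'
UNION
-- hypothetical new location
SELECT location_id, location_description
FROM [SERVER20].[Shop_Mexico].[dbo].[report_asset_history]
WHERE @LOCATION = 'MEXICO'
)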
First of all, you don't need a different #temp table for each location; you're storing the same data columns in each. Secondly, you could store the FROM clause in a table keyed by location and then use dynamic SQL to select into your temp table. Here is some code and an example.
DECLARE @fromClause VARCHAR(255)
DECLARE @sql VARCHAR(MAX)
--Explicitly create the temp table
CREATE TABLE #T (location_id int, location_description varchar(255) )
--Get the FROM clause
select @fromClause = fromClause from tblLocation WHERE location = @LOCATION
--Build the Dynamic SQL
SET @sql = 'SELECT location_id, location_description ' + @fromClause
--Insert into your temp table
INSERT INTO #T execute ( @sql )
--View the results
SELECT * FROM #T
Here is the tblLocation definition:
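A minimal sketch of such a table, assuming one row per location that stores the FROM clause used by the dynamic SQL above:
CREATE TABLE tblLocation
(
location VARCHAR(50) NOT NULL PRIMARY KEY,
fromClause VARCHAR(255) NOT NULL
)

INSERT INTO tblLocation (location, fromClause)
VALUES ('Canada', 'FROM [SERVER20].[Shop_Canada].[dbo].[report_asset_history]'),
       ('USA',    'FROM [SERVER20].[Shop_USA].[dbo].[report_asset_history]')
With this in place, adding a new location becomes a single INSERT rather than a code change.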
You could achieve this by having a #Temp table for all your assets and checking against a location table in your DB, then calling the variable when the SP is executed:
Call the SP:
SP_Some_SP 'Canada'
Inside the SP
Declare @Location Varchar(100)
Declare @Location_ID int = (Select Location_ID from [Location] where Location_Description = @Location)
CREATE TABLE #TempAssetHistory
(
location_ID int,
location_Description varchar(100)
)
If Exists(Select Location_Description from [Location] where Location_Description = @Location )
BEGIN
Insert INTO #TempAssetHistory
Values(@Location_ID,@Location)
END
ELSE
BEGIN
-- Do something
END
You can use one table to store all the selected data; use something like this:
DECLARE @country VARCHAR(20) = 'USA'
CREATE TABLE #tempHistory (country varchar(20), location_id int, location_description varchar(20))
DECLARE @sql VARCHAR(max)
SET @sql = 'SELECT ''' + @country + ''' as country, location_id, location_description FROM [SERVER20].[Shop_' + @country + '].[dbo].[report_asset_history]'
INSERT INTO #tempHistory EXEC (@sql)
Or you can use a more flexible solution and pass the list of countries as a parameter:
CREATE PROCEDURE dbo.prepare_tempHistory(@listOfCountries VARCHAR(max))
AS
DECLARE @sql varchar(max)
SET @sql = ''
SELECT @sql = @sql + ' UNION ALL ' +
'SELECT ''' + val + ''' as country, location_id, location_description FROM [SERVER20].[Shop_' + val + '].[dbo].[report_asset_history]'
FROM dbo.fnSplit(@listOfCountries, ',')
SET @sql = RIGHT(@sql, len(@sql)-11)
INSERT INTO #tempHistory EXEC (@sql)
GO
but then you need a small function to split the parameter into a table:
CREATE FUNCTION dbo.fnSplit
(
@delimited nvarchar(max),
@delimiter nvarchar(5)
)
RETURNS @ret TABLE (val nvarchar(max))
AS
BEGIN
declare @xml xml
set @xml = N'<root><r>' + replace(@delimited,@delimiter,'</r><r>') + '</r></root>'
insert into @ret(val)
select r.value('.','varchar(max)') as item
from @xml.nodes('//root/r') as records(r)
RETURN
END
GO
Now you can prepare the data the easy way:
CREATE TABLE #tempHistory (country varchar(20), location_id int, location_description varchar(20))
EXEC dbo.prepare_tempHistory 'USA,Canada,Mexico'
SELECT * FROM #tempHistory
For any additional country, modify only the parameter.
Using suggestions from the answers and comments mentioning dynamic SQL, I came up with this little solution for myself. It's nicer than what Denis Rubashkin suggested because with his solution I would still have to copy the entire SQL query every time a new location is added. This way, I can just copy the 4 lines...
IF @LOCATION = 'CANADA'
BEGIN
SET @location_server = 'SHOP_Canada'
END
...from the IF block at the beginning whenever I want to add a new location, instead of the whole SQL statement. It substitutes the correct database name from the parameter and appends that name to the temp tables.
@LOCATION varchar(50),
@sqlCommand nvarchar(2000),
@location_server varchar(75)
IF @LOCATION = 'CANADA'
BEGIN
SET @location_server = 'SHOP_Canada'
END
IF @LOCATION = 'USA'
BEGIN
SET @location_server = 'SHOP_USA'
END
SET @sqlCommand = 'SELECT location_id, location_description
into ##MarineShopAssetExpensesLTD_'+@location_server+'
FROM [SERVER20].'+QUOTENAME(@location_server)+'.[dbo].[report_asset_history]
INNER JOIN [SERVER20].'+QUOTENAME(@location_server)+'.[dbo].imtbl_asset ON [report_asset_history].asset_id = imtbl_asset.id
GROUP BY location_id, location_description'
EXECUTE sp_executesql @sqlCommand
As every SP is looking for different data, there is another, somewhat more drastic approach that you could use.
This approach requires that you put all the data SELECT statements for all 50 SPs into a single SP, say spDataLoad, which takes two parameters - a data set name and the location. The spDataLoad selects data based on which data set and which location are specified, and returns that requested data to the caller.
You will still need multiple select statements for each different combination of data set and location, but at least everything is in one single SP, and changes to the data will not affect all 50 SPs. If the tables and data for each location are the same, then you can subdivide the code into sections, one for each location, with identical code except for the database name corresponding to the location.
Using the code above as an example, if we choose 'AssetHistory' as the data set name then the code in your existing SPs would look like this:
:
:
CREATE TABLE #AssetHistory (
location_ID int,
location_Description varchar(100)
);
INSERT INTO #AssetHistory EXEC spDataLoad @DataSet='AssetHistory', @Location=@Location;
:
: use the data set
:
Now suppose you have another SP that requires the data set 'AssetDetails'; then the code would look like this:
CREATE TABLE #AssetDetails (
:
: Specification for Asset Details table
:
);
INSERT INTO #AssetDetails EXEC spDataLoad @DataSet='AssetDetails', @Location=@Location;
:
: use the data set
:
The stored procedure spDataLoad, with a section for each of the locations and separate selects based on the requested data set, might look like this:
CREATE PROCEDURE spDataLoad
@DATASET varchar(20)
, @LOCATION Varchar(50)
AS
BEGIN
-- CANADA SECTION ------------------------------------
IF @LOCATION = 'CANADA'
BEGIN
IF @DATASET = 'AssetHistory'
SELECT location_id, location_description
FROM [SERVER20].[Shop_Canada].[dbo].[report_asset_history]
ELSE IF @DATASET = 'AssetDetails'
SELECT
: Asset details data
FROM [SERVER20].[Shop_Canada].[dbo].[report_asset_details]
ELSE IF @DATASET = '....'
:
: Etc, Etc for CANADA SECTION
END;
-- USA SECTION ------------------------------------
IF @LOCATION = 'USA'
BEGIN
IF @DATASET = 'AssetHistory'
SELECT location_id, location_description
FROM [SERVER20].[Shop_USA].[dbo].[report_asset_history]
ELSE IF @DATASET = 'AssetDetails'
SELECT
: Asset details data
FROM [SERVER20].[Shop_USA].[dbo].[report_asset_details]
ELSE IF @DATASET = '....'
:
: Etc, Etc for USA SECTION
END;
-- SOME OTHER SECTION ---------------------------
IF @LOCATION = 'SOME OTHER'
BEGIN
: Same logic
END
RETURN 0;
END
To manage performance you would probably need to add defaulted parameters for filtering, which can be specified by the caller, and add WHERE clauses to the data set selections.
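A sketch of what that could look like, with a hypothetical defaulted @AssetId filter added to spDataLoad (the asset_id column is assumed, borrowed from the join shown earlier):
ALTER PROCEDURE spDataLoad
@DATASET varchar(20)
, @LOCATION Varchar(50)
, @AssetId int = NULL            -- hypothetical defaulted filter parameter
AS
BEGIN
IF @LOCATION = 'CANADA' AND @DATASET = 'AssetHistory'
SELECT location_id, location_description
FROM [SERVER20].[Shop_Canada].[dbo].[report_asset_history]
WHERE (@AssetId IS NULL OR asset_id = @AssetId);
-- ... remaining data sets and locations follow the same pattern
RETURN 0;
END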
I am stuck at a point.
I want to select based on the column EntityType: if the EntityType value is 'Booking' or 'Job' then it should filter on that value, but if it is NULL or an empty string ('') then I want it to return all the rows containing jobs and bookings.
create proc spproc
@entityType varchar(50)
as
begin
SELECT TOP 1000 [Id]
,[EntityId]
,[EntityType]
,[TenantId]
FROM [FutureTrakProd].[dbo].[Activities]
where TenantId=1 and EntityType= case @EntityType when 'BOOKING' then 'BOOKING'
when 'JOB' then 'JOB'
END
end
Any help would be appreciated.
Thank you!
create proc spproc
@entityType varchar(50)
as
begin
SELECT TOP 1000 [Id]
,[EntityId]
,[EntityType]
,[TenantId]
FROM [FutureTrakProd].[dbo].[Activities]
where TenantId=1 and (@EntityType is null OR EntityType = @EntityType)
end
You don't need to use a CASE expression; you can do:
SELECT TOP 1000 [Id], [EntityId], [EntityType], [TenantId]
from [FutureTrakProd].[dbo].[Activities]
WHERE TenantId = 1 AND
(@EntityType IS NULL OR EntityType = @EntityType)
ORDER BY id; -- whatever order you want (asc/desc)
For your procedure you need to state an explicit ORDER BY clause, otherwise TOP 1000 will return an arbitrary set of rows.
You don't need a CASE expression for this, you just need an OR. The following should put you on the right path:
WHERE TenantId=1
AND (EntityType = @EntityType OR @EntityType IS NULL)
Also, note it would be wise to declare your parameter as NULLable:
CREATE PROC spproc @entityType varchar(50) = NULL
This means that someone can simply omit the parameter, rather than having to pass NULL (thus EXEC spproc; would work).
Finally, if you're going to have lots of NULLable parameters, then you're looking at a "catch-all" query; the solution would be different if that is the case. "Catch-all" queries can be notoriously slow.
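If you do stay with the OR pattern across several optional parameters, one common mitigation (a sketch, not a universal fix) is to request a fresh plan on each execution so the optimizer can account for the parameter values actually supplied:
SELECT TOP 1000 [Id], [EntityId], [EntityType], [TenantId]
FROM [FutureTrakProd].[dbo].[Activities]
WHERE TenantId = 1
AND (@EntityType IS NULL OR EntityType = @EntityType)
ORDER BY [Id]
OPTION (RECOMPILE); -- recompile per run so the plan fits the supplied parameters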
You can execute a dynamic sql query.
Query
create proc spproc
@entityType varchar(50)
as
begin
declare @sql as nvarchar(max);
declare @condition as nvarchar(2000);
select @condition = case when @entityType is not null then ' and [EntityType] = @entityType;' else ';' end;
select @sql = 'SELECT TOP 1000 [Id], [EntityId], [EntityType], [TenantId] FROM [FutureTrakProd].[dbo].[Activities] where TenantId = 1 ' + @condition;
exec sp_executesql @sql,
N'@entityType nvarchar(1000)',
@entityType = @entityType
end
I'm trying to create a stored procedure that will allow me to pick a start date and an end date to get data from, and to have a variable table name to write this data to.
I would like to pass in the two dates and the table name as parameters to the stored procedure. Here is the part I'm stuck on. I took the code out of the stored procedure to try and get it working; this way I can see the lines the error is on.
DECLARE @MinDateWeek DATETIME
SELECT @MinDateWeek= DATEADD(WEEK, DATEDIFF(WEEK,0,GETDATE()), -7)
DECLARE @MaxDateWeek DATETIME
SELECT @MaxDateWeek= DATEADD(WEEK, DATEDIFF(WEEK,0,GETDATE()),0)
DECLARE @SQLCommand NVARCHAR(MAX)
SET @SQLCommand = ' --ERROR ON THIS LINE
-- Getting how much space is used in the present
DECLARE @Present Table (VMName NVARCHAR(50), UseSpace float(24))
INSERT INTO @Present
SELECT VMName
,SUM(CapacityGB-FreeSpaceGB)
FROM VMWareVMGuestDisk
GROUP BY VMName;
-- Getting how much space was used at the reference date
DECLARE @Past Table (VMName NVARCHAR(50), UseSpace float(24))
INSERT INTO @Past
SELECT VMName
,SUM(CapacityGB-FreeSpaceGB)
FROM VMWareVMGuestDisk
WHERE Cast([Date] AS VARCHAR(20))= '''+CAST(@MinDateWeek AS varchar(20))+'''
GROUP BY VMName;
--Inserting the average growth(GB/DAY) between the 2 dates in a Temporary Table
CREATE TABLE #TempWeek (VMName NVARCHAR(50)
, CapacityGB float(24)
, GrowthLastMonthGB float(24)
, FreeSpace FLOAT(24) )
INSERT INTO #TempWeek
SELECT DISTINCT V.VMName
,SUM(V.CapacityGB)
,SUM(((W1.UseSpace-W2.UseSpace)/(DATEDIFF(DAY,'''+CONVERT(VARCHAR(50),@MaxDateWeek)+''','''+CONVERT(VARCHAR(50),@MaxDateWeek)+'''))))
,SUM(V.FreeSpaceGb)
FROM VMWareVMGuestDisk AS V
LEFT JOIN
@Present AS W1
ON
V.VMName=W1.VMName
LEFT JOIN
@Past AS W2
ON
W1.VMName=W2.VMName
WHERE (CONVERT(VARCHAR(15),Date))='''+CONVERT(VARCHAR(50),@MaxDateWeek)+'''
GROUP BY V.VMName;
-- Checking if there is already data in the table
TRUNCATE TABLE SAN_Growth_Weekly;
--insert data in permanent table
INSERT INTO SAN_Growth_Weekly (VMName,Datacenter,Cluster,Company,DaysLeft,Growth, Capacity,FreeSpace,ReportDate)
SELECT DISTINCT
G.VMName
,V.Datacenter
,V.Cluster
,S.Company
, DaysLeft =
CASE
WHEN G.GrowthLastMonthGB IS NULL
THEN ''NO DATA''
WHEN (G.GrowthLastMonthGB)<=0
THEN ''UNKNOWN''
WHEN (G.FreeSpace/G.GrowthLastMonthGB)>0 AND (G.FreeSpace/G.GrowthLastMonthGB) <=30
THEN ''Less then 30 Days''
WHEN (G.FreeSpace/G.GrowthLastMonthGB)>30 AND (G.FreeSpace/G.GrowthLastMonthGB)<=60 THEN ''Less then 60 Days''
WHEN (G.FreeSpace/G.GrowthLastMonthGB)>60 AND (G.FreeSpace/G.GrowthLastMonthGB)<=90
THEN ''Less then 90 Days''
WHEN (G.FreeSpace/G.GrowthLastMonthGB)>90 AND (G.FreeSpace/G.GrowthLastMonthGB)<=180 THEN ''Less then 180 Days''
WHEN (G.FreeSpace/G.GrowthLastMonthGB)>180 AND (G.FreeSpace/G.GrowthLastMonthGB)<=365 THEN ''Less then 1 Year''
ELSE ''Over 1 Year''
END
,G.GrowthLastMonthGB
,G.CapacityGB
,G.FreeSpace
,'''+@MaxDateWeek+'''
FROM #tempWeek AS G
RIGHT JOIN VMWareVMGuestDisk AS V
ON V.VMName = G.VMName COLLATE SQL_Latin1_General_CP1_CI_AS
LEFT JOIN Server_Reference AS S
ON G.VMName COLLATE SQL_Latin1_General_CP1_CI_AS=S.[Asset Name]
WHERE '''+CONVERT(VARCHAR(50),@MaxDateWeek)+'''= CONVERT(VARCHAR(50),V.Date);'
EXEC sp_executesql @SQLCommand;
The error I get is:
Conversion failed when converting date and/or time from character string.
Thanks for the help.
Are you forgetting to enclose your GROUP BY in the dynamic SQL?
ALTER PROCEDURE SAN_DISK_GROWTH
@MaxDateWeek DATETIME ,
@MinDateWeek DATETIME
AS
BEGIN
DECLARE @SQLCommand NVARCHAR(MAX)
SELECT @SQLCommand = '
DECLARE @Present Table (VMName NVARCHAR(50), UseSpace float(24))
INSERT INTO @Present
SELECT VMName
,SUM(CapacityGB - FreeSpaceGB)
FROM VMWareVMGuestDisk
WHERE CONVERT(VARCHAR(15),Date) = '''
+ CONVERT(VARCHAR(50), @MaxDateWeek) + ''' GROUP BY VMName;'
END
Try specifying your date/time values as parameters to the dynamic SQL query. In other words, instead of converting the dates to a varchar, use parameters in the query:
WHERE @MaxDateWeek = V.Date;
And pass the parameters on the call to sp_executesql like so:
EXEC sp_executesql @SQLCommand,
N'@MinDateWeek datetime, @MaxDateWeek datetime',
@MinDateWeek = @MinDateWeek,
@MaxDateWeek = @MaxDateWeek
Then you won't have to convert your dates to strings.
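Putting that together with the query from the question, a trimmed sketch of the parameterised version (table and column names taken from the question; the date comparison is simplified) could look like this:
DECLARE @MinDateWeek DATETIME = DATEADD(WEEK, DATEDIFF(WEEK, 0, GETDATE()), -7);
DECLARE @MaxDateWeek DATETIME = DATEADD(WEEK, DATEDIFF(WEEK, 0, GETDATE()), 0);
DECLARE @SQLCommand NVARCHAR(MAX);

SET @SQLCommand = N'
SELECT VMName, SUM(CapacityGB - FreeSpaceGB) AS UseSpace
FROM VMWareVMGuestDisk
WHERE [Date] = @MinDateWeek          -- parameter used directly inside the dynamic SQL
GROUP BY VMName;';

EXEC sp_executesql @SQLCommand,
N'@MinDateWeek datetime, @MaxDateWeek datetime',
@MinDateWeek = @MinDateWeek,
@MaxDateWeek = @MaxDateWeek;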
Note that this does not work for dynamic table names or column names. Those need to be concatenated together as part of the dynamic SQL itself.
For example, if you had a table name variable like this:
declare @TableName sysname
set @TableName = 'MyTable'
And you wanted the dynamic SQL to retrieve data from that table, then you would need to build your FROM statement like this:
set @SQLCommand = N'SELECT ...
FROM ' + @TableName + N' WHERE...
This builds the name into the SQL like so:
'SELECT ... FROM MyTable WHERE...'
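When concatenating object names like this, it is safer to wrap them in QUOTENAME so odd identifiers or injected text cannot break out of the name (a small sketch; MyTable is a hypothetical table):
DECLARE @TableName sysname = 'MyTable'   -- hypothetical table name
DECLARE @SQLCommand nvarchar(max)

-- QUOTENAME brackets the identifier, producing: SELECT * FROM [MyTable];
SET @SQLCommand = N'SELECT * FROM ' + QUOTENAME(@TableName) + N';'

EXEC sp_executesql @SQLCommand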
I'm trying to do a select from a table whose name will need to be in a variable. I'm working with tables that are dynamically created from an application. The table will be named CMDB_CI_XXX, where XXX will be an integer value based on a value in another table. The ultimate goal is to get the CI Name from the table.
I've tried passing the pieces that make up the table name to a function, stringing them together, and then returning the name value, but I'm not allowed to use an EXEC statement in a function.
This is what I want to execute to get the name value back:
Select [Name] from 'CMDB_CI_' + C.CI_TYPE_ID + Where CI_ID = c.CI_ID
This is the code in the SP that I'd like to use the function in to get the name value:
SELECT
CI_ID,
C.CI_TYPE_ID,
CI_CUSTOM_ID,
STATUS,
CI_TYPE_NAME,
--(Select [Name] from CMDB_CI_ + C.CI_TYPE_ID + Where CI_ID = c.CI_ID)
FROM [footprints].[dbo].[CMDB50_CI_COMMON] c
join [footprints].[dbo].[CMDB50_CI_TYPE] t
on c.CI_TYPE_ID = t.CI_TYPE_ID
where status <> 'retired'
order by CI_TYPE_NAME
I'm not sure what to do with this. Please help?
Thanks,
Jennifer
-- This part would be a SP parameter I expect
DECLARE @tableName varchar(100)
SET @tableName = 'CMDB_CI_508'
-- Main SP code
DECLARE @sqlStm VARCHAR(MAX)
SET @sqlStm = 'SELECT *
FROM '+ @tableName
EXEC (@sqlStm)
Fiddle http://sqlfiddle.com/#!3/436a7/7
First off, yes, I know it's a bad design. I didn't design it. It came with the problem tracking software that my company bought for our call center. So I gave up altogether on the approach I was going for and used a cursor to pull all the names from the various tables into one temp table, and then used said temp table to join to the original query.
ALTER Proc [dbo].[CI_CurrentItems]
As
Declare @CIType nvarchar(6)
Declare @Qry nvarchar(100)
/*
Create Table Temp_CI
( T_CI_ID int,
T_CI_Type_ID int,
T_Name nvarchar(400)
)
*/
Truncate Table Temp_CI
Declare CI_Cursor Cursor For
select distinct CI_TYPE_ID FROM [footprints].[dbo].[CMDB50_CI_COMMON]
where STATUS <> 'Retired'
Open CI_Cursor
Fetch Next from CI_Cursor into @CIType
While @@FETCH_STATUS = 0
BEGIN
Set @Qry = 'Select CI_ID, CI_Type_ID, Name from Footprints.dbo.CMDB50_CI_' + @CIType
Insert into Temp_CI Exec (@Qry)
Fetch Next from CI_Cursor into @CIType
END
Close CI_Cursor
Deallocate CI_Cursor
SELECT CI_ID,
C.CI_TYPE_ID,
CI_CUSTOM_ID,
STATUS,
CI_TYPE_NAME,
T_Name
FROM [footprints].[dbo].[CMDB50_CI_COMMON] c
JOIN [footprints].[dbo].[CMDB50_CI_TYPE] t
ON c.CI_TYPE_ID = t.CI_TYPE_ID
JOIN Temp_CI tc
ON c.CI_ID = tc.T_CI_ID
AND t.CI_TYPE_ID = tc.T_CI_TYPE_ID
WHERE STATUS <> 'retired'
ORDER BY CI_TYPE_NAME
I have a table with CSV values in the columns, as below:
ID Name text
1 SID,DOB 123,12/01/1990
2 City,State,Zip NewYork,NewYork,01234
3 SID,DOB 456,12/21/1990
What I need to get is 2 tables as output in this scenario, with the corresponding values:
ID SID DOB
1 123 12/01/1990
3 456 12/21/1990
ID City State Zip
2 NewYork NewYork 01234
Is there any way of achieving this using a cursor or any other method in SQL Server?
There are several ways that this can be done. One way that I would suggest would be to split the data from the comma separated list into multiple rows.
Since you are using SQL Server, you could implement a recursive CTE to split the data, then apply a PIVOT function to create the columns that you want.
;with cte (id, NameItem, Name, textItem, text) as
(
select id,
cast(left(Name, charindex(',',Name+',')-1) as varchar(50)) NameItem,
stuff(Name, 1, charindex(',',Name+','), '') Name,
cast(left(text, charindex(',',text+',')-1) as varchar(50)) textItem,
stuff(text, 1, charindex(',',text+','), '') text
from yt
union all
select id,
cast(left(Name, charindex(',',Name+',')-1) as varchar(50)) NameItem,
stuff(Name, 1, charindex(',',Name+','), '') Name,
cast(left(text, charindex(',',text+',')-1) as varchar(50)) textItem,
stuff(text, 1, charindex(',',text+','), '') text
from cte
where Name > ''
and text > ''
)
select id, SID, DOB
into table1
from
(
select id, nameitem, textitem
from cte
where nameitem in ('SID', 'DOB')
) d
pivot
(
max(textitem)
for nameitem in (SID, DOB)
) piv;
See SQL Fiddle with Demo. The recursive version will work great, but if you have a large dataset you could run into performance issues, so you could also use a user-defined function to split the data:
create FUNCTION [dbo].[Split](@String1 varchar(MAX), @String2 varchar(MAX), @Delimiter char(1))
returns @temptable TABLE (colName varchar(MAX), colValue varchar(max))
as
begin
declare @idx1 int
declare @slice1 varchar(8000)
declare @idx2 int
declare @slice2 varchar(8000)
select @idx1 = 1
if len(@String1)<1 or @String1 is null return
while @idx1 != 0
begin
set @idx1 = charindex(@Delimiter,@String1)
set @idx2 = charindex(@Delimiter,@String2)
if @idx1 !=0
begin
set @slice1 = left(@String1,@idx1 - 1)
set @slice2 = left(@String2,@idx2 - 1)
end
else
begin
set @slice1 = @String1
set @slice2 = @String2
end
if(len(@slice1)>0)
insert into @temptable(colName, colValue) values(@slice1, @slice2)
set @String1 = right(@String1,len(@String1) - @idx1)
set @String2 = right(@String2,len(@String2) - @idx2)
if len(@String1) = 0 break
end
return
end;
Then you can use a CROSS APPLY to get the result for each row:
select id, SID, DOB
into table1
from
(
select t.id,
c.colname,
c.colvalue
from yt t
cross apply dbo.split(t.name, t.text, ',') c
where c.colname in ('SID', 'DOB')
) src
pivot
(
max(colvalue)
for colname in (SID, DOB)
) piv;
See SQL Fiddle with Demo
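For the second result set (City/State/Zip), the same split output can be pivoted on those names; a sketch following the same pattern (not part of the linked fiddle):
select id, City, State, Zip
into table2
from
(
select t.id,
c.colname,
c.colvalue
from yt t
cross apply dbo.split(t.name, t.text, ',') c
where c.colname in ('City', 'State', 'Zip')
) src
pivot
(
max(colvalue)
for colname in (City, State, Zip)
) piv;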
You'd need to approach this as a multi-step ETL project. I'd probably start with exporting the two types of rows into a couple staging tables. So, for example:
select * from yourtable /* rows that start with a number */
where substring(text,1,1) in
('0','1','2','3','4','5','6','7','8','9')
select * from yourtable /* rows that don't start with a number */
where substring(text,1,1)
not in ('0','1','2','3','4','5','6','7','8','9')
/* or simply this to follow your example explicitly */
select * from yourtable where name like 'sid%'
select * from yourtable where name like 'city%'
Once you get the two types separated, you can split them out with one of the already-written split functions readily found on the web.
Aaron Bertrand (who is on here often) has written up a great post on the variety of ways to split comma-delimited strings using SQL. Each of the methods is compared and contrasted here:
http://www.sqlperformance.com/2012/07/t-sql-queries/split-strings
If your row count is minimal (under 50k, let's say) and it's going to be a one-time operation, then pick the easiest way and don't worry too much about all the performance numbers.
If you have a ton of rows or this is an ETL process that will run all the time then you'll really want to pay attention to that stuff.
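As a side note, if you are on SQL Server 2016 or later, the built-in STRING_SPLIT function can stand in for a hand-rolled splitter when you only have a single delimited string to break apart (it returns one column named value and does not report each item's position, so it does not directly handle the paired Name/text columns used here):
SELECT value
FROM STRING_SPLIT('SID,DOB', ',');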
A simple solution using cursors to build temporary tables. This has the limitation of making all columns VARCHAR and would be slow for large amounts of data.
--** Set up example data
DECLARE @Source TABLE (ID INT, Name VARCHAR(50), [text] VARCHAR(200));
INSERT INTO @Source
(ID, Name, [text])
VALUES (1, 'SID,DOB', '123,12/01/1990')
, (2, 'City,State,Zip', 'NewYork,NewYork,01234')
, (3, 'SID,DOB', '456,12/21/1990');
--** Declare variables
DECLARE @Name VARCHAR(200) = '';
DECLARE @Text VARCHAR(1000) = '';
DECLARE @SQL VARCHAR(MAX);
--** Set up cursor for the tables
DECLARE cursor_table CURSOR FAST_FORWARD READ_ONLY FOR
SELECT s.Name
FROM @Source AS s
GROUP BY Name;
OPEN cursor_table
FETCH NEXT FROM cursor_table INTO @Name;
WHILE @@FETCH_STATUS = 0
BEGIN
--** Dynamically create a temp table with the specified columns
SET @SQL = 'CREATE TABLE ##Table (' + REPLACE(@Name, ',', ' VARCHAR(50),') + ' VARCHAR(50));';
EXEC(@SQL);
--** Set up cursor to insert the rows
DECLARE row_cursor CURSOR FAST_FORWARD READ_ONLY FOR
SELECT s.Text
FROM @Source AS s
WHERE Name = @Name;
OPEN row_cursor;
FETCH NEXT FROM row_cursor INTO @Text;
WHILE @@FETCH_STATUS = 0
BEGIN
--** Dynamically insert the row
SELECT @SQL = 'INSERT INTO ##Table VALUES (''' + REPLACE(@Text, ',', ''',''') + ''');';
EXEC(@SQL);
FETCH NEXT FROM row_cursor INTO @Text;
END
--** Display the table
SELECT *
FROM ##Table;
--** Housekeeping
CLOSE row_cursor;
DEALLOCATE row_cursor;
DROP TABLE ##Table;
FETCH NEXT FROM cursor_table INTO @Name;
END
CLOSE cursor_table;
DEALLOCATE cursor_table;
I am getting this error:
Msg 195, Level 15, State 10, Line 1
'fnParseName' is not a recognized built-in function name.
On this query:
SELECT fnParseName(DOCTORFIRSTNAME+' ' +DOCTORLASTNAME)
FROM [PracticeandPhysician]
Here's the code for fnParseName
create FUNCTION [dbo].[fnParseName]
(@FullName NVARCHAR(128))
RETURNS @FullNameParts TABLE (FirstName NVARCHAR(128),
Middle NVARCHAR(128),
LastName NVARCHAR(128))
AS
BEGIN
... function body that populates @FullNameParts ...
RETURN
END
Why am I getting this error?
It's a table-valued function. So you probably meant:
SELECT p.DOCTORFIRSTNAME, p.DOCTORLASTNAME, t.FirstName, t.Middle, t.LastName
FROM dbo.[PracticeandPhysician] AS p
CROSS APPLY dbo.fnParseName(p.DOCTORFIRSTNAME + ' ' + p.DOCTORLASTNAME);
Note that you can't say:
SELECT dbo.TableValueFunction('foo');
Any more than you could say:
SELECT dbo.Table;
--or
SELECT dbo.View;
You can, however, say:
SELECT * FROM dbo.fnParseName('foo bar');
--or
SELECT FirstName, Middle, LastName FROM dbo.fnParseName('foo bar');
(Not that I have validated that your function does what you think, or does so efficiently.)
Please always use the dbo. prefix as others have suggested.
You always have to prefix SQL function calls with the schema name dbo. or the schema name for that function (dbo is the default schema).
SELECT dbo.fnParseName(--etc
UDFs/Functions need to be prefixed with the schema name (most likely "dbo"). Change the call to
SELECT
dbo.fnParseName(DOCTORFIRSTNAME + ' ' + DOCTORLASTNAME)
FROM
[PracticeandPhysician]
The problem you have is similar to what I encountered too. Scalar functions and inline table functions are quite different in terms of implementation; see below for the difference.
Create function udfCountry
(
@CountryName varchar(50)
)
returns varchar(2)
as
BEGIN
Declare @CountryID varchar(2),
@Result varchar(2)
Select @CountryID = Country from
dbo.GeoIPCountryNames where CountryName = @CountryName
set @Result = isNull(@CountryID, 'NA')
if @Result = 'NA'
set @Result = 'SD'
return @Result
End
-- Implementation
select dbo.[udfCountry]('Nigeria')
-- Sample result
NG
-- Inline table function sample
Create FUNCTION ConditionEvaluation
(
@CountrySearch varchar(50)
)
returns @CountryTable table
(
Country varchar(2),
CountryName varchar(50)
)
as
Begin
Insert into @CountryTable(Country, CountryName)
Select Country, CountryName from GeoIPCountryNames
where Country like '%'+@CountrySearch+'%'
return
end
-- Implementation sample
Declare @CountrySearch varchar(50)
set @CountrySearch='a'
select * from ConditionEvaluation(@CountrySearch)
The pattern of implementing a scalar function is quite different from an inline table function. I hope this helps.
If you want to assign a value returned by a table-valued function to a variable in a stored procedure, you can do it this way:
select @my_local_variable_in_procedure = column_name_returned_from_tfn from dbo.my_inline_tfn (@tfn_parameter)
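For example, using the fnParseName function shown earlier (assuming it returns a single row; 'John A Smith' is just a sample input):
DECLARE @FirstName NVARCHAR(128);

SELECT @FirstName = FirstName
FROM dbo.fnParseName('John A Smith');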