SQL Computed Column used in queries causing performance issues

I have a table with columns A, B, and C, and another table with a column username.
Column C is a computed column defined as getName(A).
getName(A) is roughly:
CREATE FUNCTION [dbo].[GetName] (
    @name VARCHAR(100)
)
RETURNS VARCHAR(100)
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @retval VARCHAR(100);
    DECLARE @nextWord VARCHAR(100);
    SET @retval = @name;
    IF EXISTS (SELECT 1 FROM someTable WHERE username = SUSER_NAME())
    BEGIN
        SET @name = REPLACE(REPLACE(REPLACE(RTRIM(LTRIM(@name)), ',', ' ,'), '(', '( '), ')', ' )');
        SET @retval = LEFT(@name, 1);
        WHILE CHARINDEX(' ', @name, 1) > 0
        BEGIN
            SET @name = LTRIM(RIGHT(@name, LEN(@name) - CHARINDEX(' ', @name, 1)));
            IF CHARINDEX(' ', @name, 1) > 0
            BEGIN
                SET @nextWord = LTRIM(LEFT(@name, CHARINDEX(' ', @name, 1) - 1));
            END
            ELSE
            BEGIN
                SET @nextWord = @name;
            END
            SET @retval += ' ' + CASE
                WHEN @nextWord IN (
                        'List'
                        ,'Of'
                        ,'Different'
                        ,'Words'
                        )
                    THEN @nextWord
                WHEN ISNUMERIC(@nextWord) = 1
                    THEN @nextWord
                WHEN ISDATE(@nextWord) = 1
                    THEN @nextWord
                ELSE LEFT(@nextWord, 1)
                END;
        END
    END
    RETURN @retval;
END
Now when I try to use column C in queries, it basically times out, and I'm trying to figure out if there is a way to make it faster. If the computed function for C just references A, it runs normally; but when it has to either choose A, or choose the first letter of each word in A (along with the words in the allowed list), it goes slow. If I make the EXISTS check always true it goes relatively quick. I tried with the EXISTS and it is still not fast.
Any advice would be greatly appreciated.
EDIT: I updated the function above. I should note that when the EXISTS query returns true it runs quickly, and when it returns false it runs slowly. That is the bigger dilemma that I am confused about.

This is, alas, a very reasonable function, because it is the only way to create a computed column that references another table.
The following code is safer:
BEGIN
    DECLARE @retval VARCHAR(100);
    IF EXISTS (SELECT 1 FROM someTable WHERE username = SUSER_NAME())
    BEGIN
        SET @retval = LEFT(@name, 1);
    END
    ELSE
        SET @retval = @name;
    RETURN @retval;
END
The isnull() method is clever, but the original code would generate an error if there were multiple rows in the table that matched the where condition. Also, it requires considering all values in the table, rather than just the first. EXISTS knows to stop at the first matching row.
You want an index on sometable(username). You can do this either by creating a unique constraint or by creating the index explicitly.
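For reference, a minimal sketch of that index (the someTable/username names come from the question; the index and constraint names here are made up):

```sql
-- Explicit nonclustered index on the column the EXISTS probe filters on.
CREATE NONCLUSTERED INDEX IX_someTable_username
    ON dbo.someTable (username);

-- Or, if username is unique, a unique constraint builds the index for you:
-- ALTER TABLE dbo.someTable
--     ADD CONSTRAINT UQ_someTable_username UNIQUE (username);
```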

Related

SQL Mass Sequence Update

I am working with SQL Server 2012, and I have an issue where a customer's serial number starts with leading zeros (e.g. 0000001), which in turn causes formatting problems when they export their data to third-party interfaces via Excel. We have tried to discuss making changes to Excel, but the client is not willing.
What I need is an "easy" way to update all existing serial numbers on all tables which have a link to Serial Number (currently 362 tables) to a sequence beginning with 1 (e.g. 0000001 to 1000001).
Something like this should work.
CREATE PROCEDURE ttt
AS
BEGIN
    SELECT DISTINCT TABLE_NAME, 0 AS isProcessed
    INTO #t
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE COLUMN_NAME = 'UserID';

    DECLARE @iLooper INTEGER;
    DECLARE @tableName VARCHAR(50);
    DECLARE @dynamicQuery VARCHAR(1024);
    SET @iLooper = 0;
    WHILE (@iLooper = 0)
    BEGIN
        SELECT @tableName = TABLE_NAME FROM #t WHERE isProcessed = 0;
        IF (@@ROWCOUNT > 0)
        BEGIN
            UPDATE #t SET isProcessed = 1 WHERE TABLE_NAME = @tableName;
            SET @dynamicQuery = 'UPDATE ' + @tableName + ' SET UserID = CONCAT(''1'', SUBSTRING(UserID, 2, 255)) WHERE SUBSTRING(UserID, 1, 1) = ''0''';
            EXEC(@dynamicQuery);
        END
        ELSE
        BEGIN
            SET @iLooper = 1;
        END
    END;
    DROP TABLE #t;
END
GO
EXEC ttt
Note:
You might need to disable foreign keys and other constraints if the column is used in them.
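If constraints do block the updates, one common approach on SQL Server is the undocumented sp_MSforeachtable procedure; a sketch (wrapped around the EXEC ttt call above; test on a non-production copy first):

```sql
-- sp_MSforeachtable is undocumented; the ? token is replaced
-- with each table's name in turn.
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';

EXEC ttt;  -- run the serial-number rewrite

-- Re-enable and re-validate the constraints afterwards.
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';
```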

Attempting to import a csv into SQL Server that is led by a random number of 0s

Basically, I want to skip the random number of zeroes. The solution, I think, would be to create a dummy column, in the format file, that ends at the first nonzero value. However, after scouring the net, I have no idea how to do that.
edit: To clarify, each row is preceded by a random number of 0s.
e.g.
000004412900000009982101201021042010
000000935000000009902005199322071993
I had to do something similar on an MLS import. Here is essentially what I did.
BULK INSERT TmpTable FROM [CSVPATH]
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')
Then fix the data you need to fix (the leading zeros).
There are a few ways you can do this, but I will list two.
1. UPDATE TmpTable SET TmpCol = CASE WHEN LEFT(TmpCol, 1) = '0' THEN RIGHT(TmpCol, LEN(TmpCol) - 1) ELSE TmpCol END -- removes one leading zero; repeat until no rows change
2. UPDATE TmpTable SET TmpCol = SUBSTRING(TmpCol, PATINDEX('%[^0]%', TmpCol + '.'), LEN(TmpCol))
I assume that the first step of the .csv file import process is to get the file into a database. Then the second step is to clean up the data.
To remove zeros after import, you can create a function:
create function [dbo].[RemoveLeadingZero](@strSrc as varchar(8000))
returns varchar(8000)
as
begin
    declare @strResult varchar(8000)
    set @strResult = Replace(Ltrim(Replace(@strSrc, '0', ' ')), ' ', '0')
    return @strResult
end
select dbo.RemoveLeadingZero('000004412900000009982101201021042010')
update myTable
set columnA = dbo.RemoveLeadingZero(columnA)
Try this.
DECLARE @str varchar(100) = '000004412900000009982101201021042010'
DECLARE @str1 varchar(100) = '000000935000000009902005199322071993'
SELECT @str,
       RIGHT(@str, LEN(@str) - (PATINDEX('%[1-9]%', @str) - 1)),
       @str1,
       RIGHT(@str1, LEN(@str1) - (PATINDEX('%[1-9]%', @str1) - 1))
ALTER Function [dbo].[fn_CSVToTable] (@CSVList Varchar(5000))
Returns @Table Table (ColumnData Varchar(50))
As
Begin
    If right(@CSVList, 1) <> ','
        Select @CSVList = @CSVList + ','
    Declare @Pos Smallint,
            @OldPos Smallint
    Select @Pos = 1,
           @OldPos = 1
    While @Pos < Len(@CSVList)
    Begin
        Select @Pos = CharIndex(',', @CSVList, @OldPos)
        Insert into @Table
        Select LTrim(RTrim(SubString(@CSVList, @OldPos, @Pos - @OldPos))) Col001
        Select @OldPos = @Pos + 1
    End
    Return
End
First I'd import it as-is into a temporarily created column.
I'd then create a user-defined function (or do this set-based) that looped and removed the first character whilst that character was '0':
TempCol = CASE WHEN LEFT(TempCol, 1) = '0' THEN RIGHT(TempCol, LEN(TempCol) - 1)
    ELSE TempCol
END
LOOP WHILE @LEN changes:
@LEN = SUM(LEN(TempCol))
SELECT @LEN
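The loop described above can be written as set-based T-SQL; a sketch, assuming the staging table and column are named TmpTable and TmpCol as in the earlier answer:

```sql
-- Strip one leading zero from every affected row per pass;
-- @@ROWCOUNT tells us when no row changed, so the loop can stop.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    UPDATE TmpTable
    SET TmpCol = RIGHT(TmpCol, LEN(TmpCol) - 1)
    WHERE LEFT(TmpCol, 1) = '0' AND LEN(TmpCol) > 1;
    SET @rows = @@ROWCOUNT;
END
```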

How to use a variable in a Sybase update statement

This query is to prove a concept that I will eventually use to locate all columns with a specific value and then create name/value pairs for export to JSON. But I'm stuck.
I query the list of all columns from the SQL table. I would then like to go through the columns in Table1 row by row and update the values, using the variable to construct the query. For example, as it reads through the list, if Col4 = "Old text" then I would like to set the value of Col4 = "New text".
DECLARE @c VARCHAR(100)
DECLARE ReadData CURSOR
FOR SELECT cname FROM sys.syscolumns WHERE creator = 'dbserver' AND tname = 'Table1'
DECLARE @RowCount INT
SET @RowCount = (SELECT COUNT(cname) FROM sys.syscolumns WHERE creator = 'dbserver' AND tname = 'Table1')
OPEN ReadData
DECLARE @I INT -- iterator
SET @I = 1 -- initialize
WHILE (@I <= @RowCount)
BEGIN
    FETCH NEXT ReadData INTO @c
    INSERT INTO serverdb.Table2 (cname) VALUES (@c) -- this works, inserting all 100 columns into the cname column of Table2
    UPDATE serverdb.Table1 SET @c = 'New text' WHERE @c = 'Old text' -- this fails with a syntax error; @c is not being interpreted for the query. Note: if I hard-code the @c var (for testing) to a known column name, the query works
    SET @I = @I + 1
END;
Why won't the update statement recognize the variable? What am I missing?
When you use a variable as shown below, it is treated as a character string, not as a column name:
UPDATE serverdb.Table1 SET @c = 'New text' WHERE @c = 'Old text'
You need to create a dynamic query and use the execute method to run it:
declare @sql varchar(999)
SELECT @sql = 'UPDATE serverdb.Table1 SET ' + @c + ' = ''New text'' WHERE ' + @c + ' = ''Old text'''
execute(@sql)
Hope this helps
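Putting that together with the cursor from the question, the loop body might look like this (a sketch; the table and cursor names are taken from the question):

```sql
-- @c holds the column name fetched from the ReadData cursor;
-- the dynamic string splices it in as an identifier, not a literal.
DECLARE @sql VARCHAR(999)
FETCH NEXT ReadData INTO @c
SELECT @sql = 'UPDATE serverdb.Table1 SET ' + @c + ' = ''New text''' +
              ' WHERE ' + @c + ' = ''Old text'''
EXECUTE(@sql)
```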

Declare cursor in dynamic sql

This code in a stored procedure worked for years; now, on the OPEN line, I am getting "A cursor with the name ... does not exist."
Did something change in sp_executesql that might have caused this to break?
Is there another way of doing this? (The need for dynamic SQL is because the @Types param is passed in as a pre-formatted string for the IN clause.)
SELECT @Query = 'DECLARE cur_person CURSOR FOR
    SELECT *
    FROM MyTable
    WHERE PersonID = @PersonnelID
    AND Type IN ' + @Types   -- formatted list for IN clause
EXEC sp_executesql @Query
OPEN cur_person   -- <== cursor does not exist error here
Getting that error means the cursor is being defined locally: a LOCAL cursor declared inside the sp_executesql batch goes out of scope as soon as that batch ends.
You can make cursors global by default with the database option (CURSOR_DEFAULT), but that might not be a good idea.
Another thing that you can do is put all of the cursor code in the dynamic query and execute it as one batch.
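Moving everything into the dynamic batch might look like this; a sketch that assumes @PersonnelID is an INT and elides the row processing:

```sql
-- DECLARE, OPEN, FETCH and DEALLOCATE all live in one batch,
-- so the cursor's LOCAL/GLOBAL scope no longer matters.
SELECT @Query = N'
DECLARE cur_person CURSOR FOR
    SELECT *
    FROM MyTable
    WHERE PersonID = @PersonnelID
      AND Type IN ' + @Types + N';
OPEN cur_person;
-- FETCH and process rows here ...
CLOSE cur_person;
DEALLOCATE cur_person;';
EXEC sp_executesql @Query, N'@PersonnelID INT', @PersonnelID = @PersonnelID;
```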
I don't know why it fails, but here's a split function that avoids the need for the dynamic query:
CREATE FUNCTION StringToTable
(
    @p_list varchar(MAX),
    @p_separator varchar(5) = ',',
    @p_distinct bit = null
)
RETURNS
    @ParsedList table
    (
        element varchar(500)
    )
AS
BEGIN
    DECLARE @v_element varchar(500), @Pos int, @v_insert_ind bit
    SET @p_list = LTRIM(RTRIM(@p_list)) + @p_separator
    SET @Pos = CHARINDEX(@p_separator, @p_list, 1)
    IF REPLACE(@p_list, @p_separator, '') <> ''
    BEGIN
        WHILE @Pos > 0
        BEGIN
            SET @v_insert_ind = 1
            SET @v_element = LTRIM(RTRIM(LEFT(@p_list, @Pos - 1)))
            IF @v_element <> ''
            BEGIN
                IF (@p_distinct = 1)
                    AND (SELECT count(element) FROM @ParsedList WHERE element = @v_element) > 0
                    SET @v_insert_ind = 0
                IF @v_insert_ind = 1
                BEGIN
                    INSERT INTO @ParsedList (element)
                    VALUES (@v_element)
                END
            END
            --
            SET @p_list = RIGHT(@p_list, LEN(@p_list) - @Pos)
            SET @Pos = CHARINDEX(@p_separator, @p_list, 1)
        END
    END
    RETURN
END
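With that split function in place, the query from the question no longer needs dynamic SQL at all; a sketch, passing @Types as a plain comma-separated string:

```sql
-- @Types is now e.g. 'A,B,C' instead of a pre-formatted IN-clause fragment.
DECLARE cur_person CURSOR FOR
    SELECT *
    FROM MyTable
    WHERE PersonID = @PersonnelID
      AND Type IN (SELECT element FROM dbo.StringToTable(@Types, ',', 1));
OPEN cur_person;
```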

T-SQL While Loop and concatenation

I have a SQL query that is supposed to pull out records and concatenate each to a string, then output that string. The important part of the query is below.
DECLARE @counter int;
SET @counter = 1;
DECLARE @tempID varchar(50);
SET @tempID = '';
DECLARE @tempCat varchar(255);
SET @tempCat = '';
DECLARE @tempCatString varchar(5000);
SET @tempCatString = '';
WHILE @counter <= @tempCount
BEGIN
    SET @tempID = (
        SELECT [Val]
        FROM #vals
        WHERE [ID] = @counter);
    SET @tempCat = (SELECT [Description] FROM [Categories] WHERE [ID] = @tempID);
    PRINT @tempCat;
    SET @tempCatString = @tempCatString + '<br/>' + @tempCat;
    SET @counter = @counter + 1;
END
When the script runs, @tempCatString outputs as NULL while @tempCat always outputs correctly. Is there some reason that concatenation won't work inside a WHILE loop? That seems wrong, since incrementing @counter works perfectly. So is there something else I'm missing?
Looks like it should work, but for some reason it seems to think @tempCatString is NULL, which is why you are always getting a NULL value: NULL concatenated to anything else is still NULL. I suggest you try COALESCE() on each of the variables to set them to '' if they are null.
This would be more efficient:
select @tempCatString = @tempCatString + Coalesce(Description, '') + '<br/>' from Categories...
select @tempCatString
Also look at CONCAT_NULL_YIELDS_NULL as an option to fix your concatenation issue, although I would avoid that route.
I agree with keithwarren, but I would always be sure to add an ORDER BY clause to the query. You can then be sure as to exactly what order the values are being concatenated in.
Also, the COALESCE to replace the NULL value with '' will effectively yield blank rows. I don't know if you want them or not, but if not just filter in the WHERE clause instead...
Finally, you appear to have a temp table including the IDs you're interested in. This table can just be included in a JOIN to filter the source table...
DECLARE @output VARCHAR(8000)
SET @output = ''
SELECT @output = @output + [Categories].Description + '<br/>'
FROM Categories
INNER JOIN #vals
    ON #vals.val = [Categories].ID
WHERE [Categories].Description IS NOT NULL
ORDER BY [Categories].Description
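On SQL Server 2017 and later, STRING_AGG expresses the same aggregation without the variable-assignment trick; a sketch using the same tables:

```sql
-- STRING_AGG skips NULL inputs, and WITHIN GROUP fixes the ordering.
SELECT STRING_AGG([Categories].Description, '<br/>')
         WITHIN GROUP (ORDER BY [Categories].Description) AS CatString
FROM Categories
INNER JOIN #vals
    ON #vals.val = [Categories].ID;
```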