I am building a trigger to do some calculations for me. However, for now I am just writing the commands to see whether they work and to add error handling. So I have written the following code:
DECLARE @strTotalAssets varchar(8000)
SELECT @strTotalAssets = (SELECT ProjectOther2 FROM
Project WHERE ProjectID = '00000:')
SELECT
CASE
WHEN RIGHT(value, 1) = 'M' THEN LEFT(value, (LEN(value)-1)) * 1000000
WHEN RIGHT(value, 1) = 'T' THEN LEFT(value, (LEN(value)-1)) * 1000
WHEN RIGHT(value, 1) > 0 THEN RETURN 'Error: You forgot to put a multiplier Value'
ELSE 'Error'
END
FROM Split(@strTotalAssets, '|')
The problem I have is that I do not know how to exit the script and return an error. Forgive my ignorance, but I am just starting out as a junior DBA. Hopefully you can see from the code what I am trying to do. Basically, if the user forgot to put the letter value that represents a multiplier of thousands or millions (which means the value is only an integer), then produce an error and tell the user they forgot to put a value.
As this is a trigger, returning data in the true sense isn't an option. For something that's treated in the same way as, say, a key violation error and returned to the client in the same fashion, you can RAISERROR:
RAISERROR('You forgot to put a multiplier Value', 15, 121)
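A minimal sketch of how that might sit inside the trigger (the validation condition is illustrative and assumes the M/T convention from your code; adjust it to your Split logic):
-- Sketch only: raise an error and roll back when ProjectOther2 has no
-- trailing multiplier letter. The M/T check mirrors the CASE in the question.
IF EXISTS (SELECT 1
           FROM inserted
           WHERE RIGHT(RTRIM(ProjectOther2), 1) NOT IN ('M', 'T'))
BEGIN
    RAISERROR('You forgot to put a multiplier Value', 15, 121);
    ROLLBACK TRANSACTION;
    RETURN;
END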
You're looking for GOTO Error (jumping to an error-handling label). Here's a quick primer. I only skimmed it, but it looked pretty good for what you need.
I am using SSIS to transform a raw data row into a transaction. Everything was going well until I added logic for a new field called "SplitPercentage" to the SQL command. The new field simply converts the value to a decimal; for example, 02887 becomes 0.2887.
The new logic works as intended, but now it takes 8 hours to run instead of 5 minutes.
Please see the entire original code vs. the new code here:
Greatly appreciate any help!
New logic resulting in poor performance:
IF TRIM(SUBSTRING(@line, 293, 1)) = 1
BEGIN
SET @SplitPercentage = 1
END
ELSE
BEGIN
SET @SplitPercentage = CAST(''.'' + TRIM(SUBSTRING(@line, 294, 4)) AS decimal(7, 4))
END
While your current code is not ideal, I don't see anything in your new expression (SUBSTRING(), TRIM(), concatenation, CAST) that would account for such a drastic performance hit. I suspect the cause lies elsewhere.
However, I believe your expression can be simplified to eliminate the IF. Given a 5-character field "nnnnn" that you wish to treat as a decimal n.nnnn, you should be able to do this in a single statement using STUFF() to inject the decimal point:
@SplitPercentage = CAST(STUFF(SUBSTRING(@line, 293, 5), 2, 0, '.') AS decimal(7, 4))
The STUFF() injects the decimal point at position 2 (replacing 0 characters). I see no need for the TRIM().
(You would need to double up the quotes for use within your EXEC ('...') statement.)
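For a quick sanity check, the same STUFF() expression can be run against literal 5-character samples (illustrative only):
-- Illustrative check of the STUFF() idea on literal values.
SELECT CAST(STUFF('02887', 2, 0, '.') AS decimal(7, 4)) AS SplitPct,      -- 0.2887
       CAST(STUFF('10000', 2, 0, '.') AS decimal(7, 4)) AS SplitPctWhole; -- 1.0000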
Please try changing the IF/ELSE block of code as follows:
SET @SplitPercentage = IIF(TRIM(SUBSTRING(@line, 293, 1)) = ''1''
, 1.0000
, CAST(''.'' + TRIM(SUBSTRING(@line, 294, 4)) AS DECIMAL(7, 4)));
A challenge you've run into is "I have a huge dynamic query process that I cannot debug." When I run into these issues, I try to break the problem down into smaller, solvable, set-based options.
Reading that wall of code, my pseudocode would be something like:
For all the data in Inbound_Transaction_Source with a given Source value (@SourceName)
Do all this data validation, type correction and cleanup by slicing the current line into pieces
You can then lose the row-based approach by slicing your data up. I favor using CROSS APPLY at this point in my life, but a CTE, a derived table, whatever makes sense in your head is valid.
Why I favor this approach, though, is that you can see what you're building, test it, and then modify it without worrying that you're going to upset a house of cards.
-- Column ordinal declaration and definition is offsite
SELECT
*
FROM
[dbo].[Inbound_Transaction_Source] AS ITS
CROSS APPLY
(
SELECT
CurrentAgentNo = SUBSTRING(ITS.line, @CurrentAgentStartColumn, 10)
, CurrentCompMemo = SUBSTRING(ITS.line, @CompMemoStartColumn + @Multiplier, 1)
, CurrentCommAmount = SUBSTRING(ITS.line, @CommAmountStartColumn + @Multiplier, 9)
, CurrentAnnCommAmount = SUBSTRING(ITS.line, @AnnCommAmountStartColumn + @Multiplier, 9)
, CurrentRetainedCommAmount = SUBSTRING(ITS.line, @RetainedCommAmountStartColumn + @Multiplier, 9)
, CurrentRetainedSwitch = SUBSTRING(ITS.line, @RetainedSwitchStartColumn + @Multiplier, 9)
-- etc
-- A sample of your business logic
, TransactionSourceSystemCode = SUBSTRING(ITS.line, 308, 3)
)NamedCols
CROSS APPLY
(
SELECT
-- There's some business rules to be had here for first year processing
-- Something special with position 102
SUBSTRING(ITS.line,102 , 1) AS SeniorityBit
-- If department code? is 0079, we have special rules
, TRIM(SUBSTRING(ITS.line,141, 4)) As DepartmentCode
)BR0
CROSS APPLY
(
SELECT
CASE
WHEN NamedCols.TransactionSourceSystemCode in ('LVV','UIV','LMV') THEN
CASE WHEN BR0.SeniorityBit = '0' THEN '1' ELSE '0' END
WHEN NamedCols.TransactionSourceSystemCode in ('CMP','FAL') AND BR0.DepartmentCode ='0079' THEN
CASE WHEN BR0.SeniorityBit = '1' THEN '0' ELSE '1' END
WHEN NamedCols.TransactionSourceSystemCode in ('UIA','LMA','RIA') AND BR0.SeniorityBit > '1' THEN
'1'
WHEN NamedCols.TransactionSourceSystemCode in ('FAL') THEN
'1'
ELSE '0'
END
)FY(IsFirstYear)
WHERE Source = @SourceName
ORDER BY Id;
Why did processing time increase from 5 minutes to 8 hours?
It likely had nothing to do with the change to the dynamic SQL. When an SSIS package run is "taking forever" relative to normal, then, preferably while it's still running, look at your sources and destinations and make note of what is happening, as it's likely one of the two.
A cursor complicates your life and is not needed once you start thinking in sets, but it's unlikely to be the source of the performance problems given that you have a solid baseline of what normal is. Plus, this query is a single-table query with a single filter.
Your SSIS package's data flow is probably a chip-shot Source-to-Destination Extract and Load, or Slurp and Burp, with no intervening transformation (as the logic is all in the stored procedure). If that's the case, then the only two possible performance points of contention are the source and destination. Since the source appears trivial, it's likely that some other process had the destination tied up for those 8 hours. Had you run something like sp_whoisactive on the source and destination, you could have identified the process that was blocking your run.
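If sp_whoisactive isn't installed, a minimal check against the built-in DMVs (run while the load is crawling, and assuming VIEW SERVER STATE permission) shows the same blocking picture:
-- Any non-zero blocking_session_id identifies the session holding up the load.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS current_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;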
Here is an issue that seems like it should be simple to solve, but I have been working on it for some time and cannot figure out why I cannot combine CASE in one area of the query with IF in another.
Does anyone see what is going on here? I have an old data set that needs to be converted to work with new tables. The data is pulled from WDDX and put in a temp table. That's all working properly.
Here is where I am running into trouble. I need to extract a value to a new column called DetailValue. When XXXX appears, the number appearing after XXXX belongs in DetailValue; it is then followed by _, and the number appearing after _ belongs in RiskValue. Also, when ZZZZ appears, the following character belongs in DetailValue, and it is always the last character.
When it's simply CASE, everything works fine, but when I add IF to grab a value, it tells me:
"Incorrect syntax near the keyword 'IF'. Msg 156, Level 15, State 1, Procedure OldSysDataConv Incorrect syntax near the keyword 'THEN'.
Code is:
SELECT VarName,
CASE
WHEN (CHARINDEX('XXXX', VarName) > 0 and SUBSTRING(VarName, CHARINDEX('XXXX', VarName), len(VarName)) like '%XXXX%') or SUBSTRING(VarName, CHARINDEX('ZZZZ', VarName), len(VarName)) like '%ZZZZ%' then
left(replace(left(replace(VarName, 'XXXX', ''), len(VarName)-4), 'ZZZZ', ''), (len(VarName)-5))
else
null
END as DetailName,
IF CHARINDEX('ZZZZ', VarName) > 0 THEN
right(VarName, 1)
END
as DetailValue,
There is no such thing as an IF ... THEN ... END expression in SQL. IF is a procedural statement, used in stored procedures and batches.
You want either IIF(), which is T-SQL specific:
IIF(CHARINDEX('ZZZZ', VarName) > 0, right(VarName, 1), null) as DetailValue
Or the more standard CASE:
CASE WHEN CHARINDEX('ZZZZ', VarName) > 0 THEN right(VarName, 1) END as DetailValue
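For a quick check, the two forms behave identically on a literal sample value (illustrative only):
-- Both expressions return the last character when 'ZZZZ' is present, otherwise NULL.
SELECT IIF(CHARINDEX('ZZZZ', 'ABCZZZZ9') > 0, RIGHT('ABCZZZZ9', 1), NULL) AS ViaIIF,
       CASE WHEN CHARINDEX('ZZZZ', 'ABCZZZZ9') > 0 THEN RIGHT('ABCZZZZ9', 1) END AS ViaCASE;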
I solved my own problem by simplifying my CASE statement. Some basic logic had simply slipped by me during a long day. ;)
Answer:
CASE
WHEN CHARINDEX('XXXX', VarName) > 0 AND CHARINDEX('ZZZZ', VarName) <> 0 THEN
substring(VarName, (CHARINDEX('_', VarName)-1), len(VarName))
ELSE
right(VarName, 1)
END
as DetailValue,
I am self-taught in T-SQL, so I am sure I can gain efficiency in my code writing; any pointers are welcome, even if unrelated to this specific problem.
I am having a problem during a nightly routine I wrote. The database program that is creating the initial data is out of my control and is loosely written, so I have bad data that can blow up my script from time to time. I am looking for assistance in adding error checking into my script so I lose one record instead of the whole thing blowing up.
The code looks like this:
SELECT convert(bigint,(SUBSTRING(pin, 1, 2)+ SUBSTRING(pin, 3, 4)+ SUBSTRING(pin, 7, 5) + SUBSTRING(pin, 13, 3))) AS PARCEL, taxyear, subdivisn, township, propclass, paddress1, paddress2, pcity
INTO [ASSESS].[dbo].[vpams_temp]
FROM [ASSESS].[dbo].[Property]
WHERE parcelstat='F'
GO
The problem is in the first part of this where the concatenation occurs. I am attempting to convert this string (11-1111-11111.000) into this number (11111111111000). If they put their data in correctly, there is punctuation in exactly the correct spots and numbers in the right spots. If they make a mistake, then I end up with punctuation in the wrong spots and it creates a string that cannot be converted into a number.
How about simply replacing '-' and '.' with '' before the CONVERT to BIGINT?
To do that, you would simply replace part of your code with:
SELECT CONVERT(BIGINT, REPLACE(REPLACE(pin, '-', ''), '.', '')) AS PARCEL, ...
Hope it helps.
First, I would use replace() (twice). Second, I would use try_convert():
SELECT try_convert(bigint,
replace(replace(pin, '-', ''), '.', '')
) as PARCEL,
taxyear, subdivisn, township, propclass, paddress1, paddress2, pcity
INTO [ASSESS].[dbo].[vpams_temp]
FROM [ASSESS].[dbo].[Property]
WHERE parcelstat = 'F' ;
You might want to check if there are other characters in the value:
select pin
from [ASSESS].[dbo].[Property]
where pin like '%[^-0-9.]%';
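The advantage of TRY_CONVERT() in the query above is that a malformed pin simply yields NULL instead of aborting the whole SELECT ... INTO; a quick illustration on literal values (the second pin is deliberately bad):
-- A well-formed pin converts; a malformed one returns NULL rather than raising an error.
SELECT TRY_CONVERT(bigint, REPLACE(REPLACE('11-1111-11111.000', '-', ''), '.', '')) AS GoodPin, -- 11111111111000
       TRY_CONVERT(bigint, REPLACE(REPLACE('11-11x1-11111.000', '-', ''), '.', '')) AS BadPin;  -- NULL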
Why not just:
select cast(replace(replace('11-1111-11111.000','-',''),'.','') as bigint)
Simply use the following code:
declare @var varchar(100)
set @var = '11-1111-11111.000'
select convert(bigint, replace(replace(@var,'-',''),'.',''))
Result:
11111111111000
I want to create a stored procedure in SQL Server which will allow me to skip the row and move on to the next row when an error is encountered. For example, when I pass in an input of 'BOZ3C 51' it works, but it fails with the error 'Invalid length parameter passed to the LEFT or SUBSTRING function.' when it encounters an input of 'C Z3C'.
BEGIN TRY
select distinct LEFT(SUBSTRING(ticker,1,CHARINDEX(' ',ticker) -1),len(SUBSTRING(ticker,1,CHARINDEX(' ',ticker) -1))-3)as CLASS
from SECURITY
SET @RETMSG = 'SUCCESS'
END TRY
BEGIN CATCH
SET @RETMSG = 'SecClass ERRNUM: ' + CONVERT(VARCHAR, ERROR_NUMBER()) + ' SecClass ERRMSG: ' + ERROR_MESSAGE();
print @RETMSG
END CATCH;
How can I handle such situations? Thanks for the pointers.
The only way to guarantee that you avoid the error in SQL Server is to use a CASE expression. SQL Server reserves the right to rearrange operations, and this means that the calculation in the SELECT might happen before the filtering in the WHERE.
select distinct (case when len(SUBSTRING(ticker, 1, CHARINDEX(' ', ticker) - 1)) >= 3
                 then LEFT(SUBSTRING(ticker, 1, CHARINDEX(' ', ticker) - 1), len(SUBSTRING(ticker, 1, CHARINDEX(' ', ticker) - 1)) - 3)
                 end) as CLASS -- if not the expected format, then return NULL
from SECURITY;
case is in general guaranteed to evaluate the when clauses in order and before the then (there are exceptions involving aggregate functions, but they do not apply in this case).
You can add a where clause that filters out rows that will fail.
where len(SUBSTRING(ticker, 1, CHARINDEX(' ', ticker) - 1)) - 3 >= 0
The best solution would be to make everything more programmatic. The issue is that you are checking how many characters are before a space, then subtracting 3. If you can calculate a number instead of subtracting 3 every time, and avoid going below 0, that would solve the errors; a sketch of that idea follows.
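A minimal sketch of that approach, assuming the table and column names from the question; the appended space guards against tickers with no space at all, and the CASE clamps the length so it can never go negative:
-- Sketch only: compute the prefix once, then clamp the length passed to LEFT()
-- so neither a missing space nor a short prefix produces an invalid length.
SELECT DISTINCT
    LEFT(p.prefix, CASE WHEN LEN(p.prefix) >= 3 THEN LEN(p.prefix) - 3 ELSE 0 END) AS CLASS
FROM SECURITY
CROSS APPLY (SELECT LEFT(ticker, CHARINDEX(' ', ticker + ' ') - 1) AS prefix) AS p;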
I've got a report that has been in use quite a while - in fact, the company's invoice system rests in a large part upon this report (Disclaimer: I didn't write it). The filtering is based upon whether a field of type VarChar(50) falls between two numeric values passed in by the user.
The problem is that the field the data is being filtered on now not only has simple non-numeric values such as '/A', 'TEST' and a slew of other non-numeric data, but also has numeric values that seem to be defying any type of numeric conversion I can think of.
The following (simplified) test query demonstrates the failure:
Declare @StartSummary Int,
@EndSummary Int
Select @StartSummary = 166285,
@EndSummary = 166289
Select SummaryInvoice
From Invoice
Where IsNull(SummaryInvoice, '') <> ''
And IsNumeric(SummaryInvoice) = 1
And Convert(int, SummaryInvoice) Between @StartSummary And @EndSummary
I've also attempted conversions using bigint, real and float and all give me similar errors:
Msg 8115, Level 16, State 2, Line 7
Arithmetic overflow error converting
expression to data type int.
I've tried other larger numeric datatypes such as BigInt with the same error. I've also tried using sub-queries to sidestep the conversion issue by only extracting fields that have numeric data and then converting those in the wrapper query, but then I get other errors which are all variations on a theme indicating that the value stored in the SummaryInvoice field can't be converted to the relevant data type.
Short of extracting only those records with numeric SummaryInvoice fields to a temporary table and then querying against the temporary table, is there any one-step solution that would solve this problem?
Edit: Here's the field data that I suspect is causing the problem:
SummaryInvoice
11111111111111111111111111
IsNumeric states that this field is numeric - which it is. But attempting to convert it to BigInt causes an arithmetic overflow. Any ideas? It doesn't appear to be an isolated incident; a number of records seem to have been populated with data that causes this issue.
It seems that you are going to have problems with the ISNUMERIC function, since it returns 1 if the value can be cast to any number type (including values containing '.', ',', exponents such as 'e0', etc.). If you have numbers larger than 2^63-1, you can use DECIMAL or NUMERIC. I'm not sure if you can use PATINDEX to perform a regex-like check on SummaryInvoice, but if you can, then you should try this:
SELECT SummaryInvoice
FROM Invoice
WHERE ISNULL(SummaryInvoice, '') <> ''
AND CASE WHEN PATINDEX('%[^0-9]%', SummaryInvoice) = 0 THEN CONVERT(DECIMAL(30,0), SummaryInvoice) ELSE -1 END
BETWEEN @StartSummary And @EndSummary
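PATINDEX() with that pattern simply reports the position of the first non-digit character, or 0 when the value is all digits, which is what makes it usable as the guard here; a quick illustration:
-- 0 means the value contains only digits; anything else is the position of the first offender.
SELECT PATINDEX('%[^0-9]%', '11111111111111111111111111') AS AllDigits,   -- 0
       PATINDEX('%[^0-9]%', '166285A') AS FirstNonDigitPosition;          -- 7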
You can't guarantee in what order the WHERE clause filters will be applied.
One ugly option is to decouple the inner and outer queries:
SELECT
*
FROM
(
Select TOP 2000000000
SummaryInvoice
From Invoice
Where IsNull(SummaryInvoice, '') <> ''
And IsNumeric(SummaryInvoice) = 1
ORDER BY SummaryInvoice
) foo
WHERE
Convert(int, SummaryInvoice) Between @StartSummary And @EndSummary
Another, using CASE:
Select SummaryInvoice
From Invoice
Where IsNull(SummaryInvoice, '') <> ''
And
CASE WHEN IsNumeric(SummaryInvoice) = 1 THEN Convert(int, SummaryInvoice) ELSE -1 END
Between @StartSummary And @EndSummary
YMMV
Edit: after question update
use decimal(38,0) not int
Change ISNUMERIC(SummaryInvoice) to ISNUMERIC(SummaryInvoice + '0e0')
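As a quick illustration of the decimal suggestion, the 26-digit value from the question converts cleanly once the target type is widened (illustrative only):
-- This value overflows int and bigint, but fits easily in decimal(38, 0).
SELECT CONVERT(decimal(38, 0), '11111111111111111111111111') AS SummaryInvoiceAsDecimal;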
ANDing with IsNumeric(SummaryInvoice) = 1 will not short-circuit in SQL Server.
But maybe you can use:
AND (CASE WHEN IsNumeric(SummaryInvoice) = 1 THEN Convert(int, SummaryInvoice) ELSE 0 END)
Between @StartSummary And @EndSummary
Your first issue is to fix your database structure so bad data cannot get into the field. You are putting a band-aid on a wound that needs stitches and wondering why it doesn't heal.
Database refactoring is not fun, but it needs to be done when there is a data integrity problem. I assume you aren't really invoicing someone for 11,111,111,111,111,111,111,111,111 or 'test'. So don't allow those values to ever get entered (if you can't change the structure to the correct data type, consider a trigger to prevent bad data from going in) and delete the ones you do have that are bad.
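If changing the column's data type isn't feasible right away, the same guard can be expressed as a CHECK constraint instead of a trigger. A minimal sketch, assuming the Invoice/SummaryInvoice names from the question and that existing bad rows are cleaned up separately:
-- Sketch only: block new SummaryInvoice values that contain non-digits or are
-- too long to ever convert. WITH NOCHECK skips validating existing rows (clean those up first).
ALTER TABLE Invoice WITH NOCHECK
    ADD CONSTRAINT CK_Invoice_SummaryInvoice_Digits
    CHECK (SummaryInvoice IS NULL
           OR (SummaryInvoice NOT LIKE '%[^0-9]%' AND LEN(SummaryInvoice) <= 18));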