Is there a better approach than a SQL Stored Procedure?

I have created a stored procedure GetNotifications that returns all notifications for a specific user. Other developers are using this SP in many different places.
Now we have a need to implement a paging functionality so that we do not flood users with all notifications at the same time.
I cannot modify the existing SP since it's being used.
I can create another SP with paging functionality in it, but I really don't want to do that since it means repeating a lot of code, and of course it would suck if we change the business logic for getting notifications in the future.
This is something I could do: create a new SP that internally calls GetNotifications and then implements the paging on the returned result set. But wouldn't that be unnecessary load on the server, since GetNotifications will return all the results regardless?
Can you think of a better way to approach this problem?

Modify the stored procedure with an optional parameter to return either the paged functionality, or all results (the default). This should give you the functionality you need without breaking existing code.

Have one stored proc that takes in 2 params: @PageNumber, @RowsPerPage
If 0 is passed in for both params, return all rows; otherwise do paging.
Update your existing code to pass in 0/0 for the args, then your new code can pass in actual values if/when it wants paging.
As suggested in the comments, if you specify default values of 0/0 for the params you don't even need to update the existing code.
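A sketch of what that might look like, assuming SQL Server 2012+ for OFFSET/FETCH; the table and column names are placeholders, since the original SP's body isn't shown:

```sql
-- Hypothetical sketch; table and column names are placeholders.
CREATE PROCEDURE dbo.GetNotifications
    @UserId      INT,
    @PageNumber  INT = 0,  -- defaults of 0/0 keep existing callers working
    @RowsPerPage INT = 0   -- and mean "return all rows"
AS
BEGIN
    SET NOCOUNT ON;

    IF @PageNumber = 0 AND @RowsPerPage = 0
        SELECT NotificationId, Message, CreatedOn
        FROM dbo.Notifications
        WHERE UserId = @UserId
        ORDER BY CreatedOn DESC;
    ELSE
        SELECT NotificationId, Message, CreatedOn
        FROM dbo.Notifications
        WHERE UserId = @UserId
        ORDER BY CreatedOn DESC
        OFFSET (@PageNumber - 1) * @RowsPerPage ROWS
        FETCH NEXT @RowsPerPage ROWS ONLY;  -- requires SQL Server 2012+
END
```

On older versions the paged branch can use ROW_NUMBER() in a derived table instead of OFFSET/FETCH.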

t-sql append data to an existing dataset

I have a report writer that will allow me to query my db directly, however the query has to select from a SQL view. I have to develop a lot-tracking solution, but because of the logic involved, the only way I have been able to accomplish this so far is to hijack the SQL statement before it leaves the report writer and point it toward a function (instead of the view).
I need to develop a more user-friendly way to accomplish this. My first thought was to populate the view that the report writer sees with an item and lot number from one of my tables, call my function with the item and lot number, and then somehow append the original view with usage and consumption transactions for that item/lot. Because of how the report writer is designed, the original view that returns just the item/lot must be the same object as the view that is eventually populated with the transactions.
Is there a way to use an alter view statement as part of a query? Is there a better way to accomplish this goal? I am a bit lost here.
Well, not having the reputation to comment, and seeing that this is SQL Server, could you do the following?
SELECT st.*
, dbo.ufn_usage_and_consumption(st.item_number, st.lot_number)
from some_table st
Basically, you are querying the view and, FOR EACH ROW of the view, calling the SQL Server function.
Please note that this is NOT OPTIMAL. You are essentially doing RBAR processing (Row by Agonizing Row) and calling the function for every row.
Really, I would see about creating a stored procedure, if your report writer supports that and pass parameters to call the query and pass results back.
I'm making the following assumption: the data coming back from the function is a scalar (one value only); if it's not, you can return it as a comma-delimited string.
Don't know if that helps or not but good luck with your query!
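One way to avoid the row-by-agonizing-row scalar call is CROSS APPLY against an inline table-valued function. This is a hypothetical sketch: it assumes the function could be rewritten as an inline TVF (the _tvf name is invented), which the answer above does not claim.

```sql
-- Hypothetical sketch: assumes dbo.ufn_usage_and_consumption could be
-- rewritten as an inline table-valued function (name invented here).
SELECT st.item_number,
       st.lot_number,
       uc.*
FROM some_table AS st
CROSS APPLY dbo.ufn_usage_and_consumption_tvf(st.item_number, st.lot_number) AS uc;
```

An inline TVF can be expanded into the outer query plan by the optimizer, which usually performs far better than a scalar function called once per row.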

Wrap SQL CONTAINS as an expression?

I have a question. I'm working on a site in ASP.NET, which uses an ORM. I need to use a couple of full-text search functions, such as CONTAINS. But when I try to generate it with that ORM, it generates SQL code like this:
SELECT
[Extent1].[ID] AS [ID],
[Extent1].[Name] AS [Name]
FROM [dbo].[SomeTable] AS [Extent1]
WHERE (Contains([Extent1].[Name], N'qq')) = 1
SQL Server can't parse it, because CONTAINS doesn't return a bit value. And unfortunately I can't modify the SQL query generation process, but I can modify statements in it.
My question is: is it possible to wrap the call of the CONTAINS function in something else? I tried to create another function that does a SELECT with CONTAINS, but it requires specific table/column objects, and I don't want to write one function for each table.
EDIT
I can modify the result type for that function in the ORM. In the previous sample the result type is Bit. I can change it to int, nvarchar, etc. But as I understand it, there is no Boolean type in SQL, and I can't specify it.
Can't you put this in a stored procedure, and tell your ORM to call the stored procedure? Then you don't have to worry about the fact that your ORM only understands a subset of valid T-SQL.
I don't know that I believe the argument that requiring new stored procedures is a blocker. If you have to write a new CONTAINS expression in your ORM code, how much different is it to wrap that expression in a CREATE PROCEDURE statement in a different window? If you want to do this purely in ORM, then you're going to have to put pressure on the vendor to pick up the pace and start getting more complete coverage of the language they should fully support.
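As a rough sketch of the stored-procedure wrapper suggested above — the procedure name and parameter are assumptions; SomeTable, ID, and Name come from the generated query in the question:

```sql
-- Hypothetical sketch; procedure name and parameter are invented,
-- SomeTable/ID/Name come from the question's generated query.
CREATE PROCEDURE dbo.SearchSomeTable
    @SearchTerm NVARCHAR(200)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT ID, Name
    FROM dbo.SomeTable
    -- CONTAINS is a predicate, so it is legal in a WHERE clause without
    -- the "= 1" comparison the ORM was generating.
    WHERE CONTAINS(Name, @SearchTerm);
END
```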

SQL Max parameters

I read here that the maximum number of parameters that can be passed to a Stored Procedure is 2100.
I am just curious what kind of system would require an SP with 2100 parameters to be passed, and couldn't one split that into multiple SPs?
I thought that maybe an SP that calls multiple SPs would require a lot of params to be passed; I just can't fathom writing out that disgusting EXEC statement.
The limit on procedure parameters predates both the XML data type and table-valued parameters, so back in those days there was simply no alternative. Having 2100 parameters on a procedure does not necessarily mean a human wrote it, nor that a human will call it. It is quite common in generated code, like code created by tools and frameworks, to push such boundaries for any language, since the maintenance and refactoring of the generated code occur in the generating tool, not in the resulting code.
If you have a stored procedure using 2100 parameters you most likely have some sort of design problem.
Passing in a CSV list of values in a single parameter (and using a table value split function to turn those into rows), or using a table value parameter would be much easier than handling all of those input parameters.
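For illustration, a minimal table-valued parameter version — the type, procedure, and table names here are invented:

```sql
-- Hypothetical sketch; type, procedure, and table names are invented.
CREATE TYPE dbo.IntList AS TABLE (Id INT PRIMARY KEY);
GO

CREATE PROCEDURE dbo.GetRowsByIds
    @Ids dbo.IntList READONLY  -- TVPs must be declared READONLY
AS
BEGIN
    SET NOCOUNT ON;
    SELECT t.*
    FROM dbo.SomeTable AS t
    INNER JOIN @Ids AS ids ON ids.Id = t.Id;
END
GO

-- The caller fills one table parameter instead of passing thousands of scalars:
DECLARE @Ids dbo.IntList;
INSERT INTO @Ids (Id) VALUES (1), (2), (3);
EXEC dbo.GetRowsByIds @Ids = @Ids;
```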
I had a situation where I had to run something quite like the following:
SELECT
...
WHERE
ID IN (?,?,?,?...)
The list of parameters featured all entities the user had permission to use in the system (it was dynamically generated by an underlying framework). It turns out that the DBMS had a limitation on the number of parameters that could be passed like this, and it was below 2100 (IIRC, it was Oracle and the maximum was 999 parameters in the IN list).
This would be a nice example of a rather long list of parameters to something that had to be a stored procedure (we had more than 999 and less than 2100 parameters to pass).
I don't know if the 999 constraint applies to SQL Server, but it is definitely a situation where the long list would be useful...

Consolidated: SQL Pass comma separated values in SP for filtering

I'm here to share a consolidated analysis for the following scenario:
I have an 'Item' table and a search SP for it. I want to be able to search for multiple ItemCodes like:
- Table structure: Item(Id INT, ItemCode NVARCHAR(20))
- Filter query format: SELECT * FROM Item WHERE ItemCode IN ('xx','yy','zz')
I want to do this dynamically using a stored procedure. I'll pass an @ItemCodes parameter which will have comma (',') separated values, and the search should be performed as above.
Well, I've already visited a lot of posts/forums, and here are some threads:
Dynamic SQL might be the least complex way, but I don't want to consider it because of concerns like performance and security (SQL injection, etc.).
Also other approaches like XML, etc.. if they make things complex I can't use them.
And finally, no extra temp-table JOIN kind of performance hitting tricks please.
I have to manage the performance as well as the complexity.
T-SQL stored procedure that accepts multiple Id values
Passing an "in" list via stored procedure
I've reviewed the above two posts and gone thru some solutions provided, here're some limitations:
http://www.sommarskog.se/arrays-in-sql-2005.html
This will require me to 'declare' the parameter type while passing it to the SP, which distorts the abstraction (I don't set the type on any of my parameters because each of them is treated in a generic way).
http://www.sqlteam.com/article/sql-server-2008-table-valued-parameters
This is a structured approach, but it increases complexity, requires DB-structure-level changes, and it's not as abstract as the above.
http://madprops.org/blog/splitting-text-into-words-in-sql-revisited/
Well, this seems to match up with my old solution. Here's what I did in the past -
I created a SQL function, [GetTableFromValues], which returns a temp table populated with each item (one per row) from the comma-separated @ItemCodes.
And here's how I use it in my WHERE clause filter in the SP -
SELECT * FROM Item WHERE ItemCode IN (SELECT * FROM [dbo].[GetTableFromValues](@ItemCodes))
This one is reusable and looks simple and short (comparatively, of course). Is there anything I've missed, or does any expert have a better solution (obviously 'within' the limitations of the above mentioned points)?
Thank you.
I think using dynamic T-SQL will be pragmatic in this scenario. If you are careful with the design, dynamic sql works like a charm. I have leveraged it in countless projects when it was the right fit. With that said let me address your two main concerns - performance and sql injection.
With regards to performance, read the T-SQL reference on parameterized dynamic SQL and sp_executesql (instead of sp_execute). A combination of parameterized SQL and sp_executesql will get you out of the woods on performance by ensuring that query plans are reused and sp_recompiles are avoided. I have used dynamic SQL even in real-time contexts, and it works like a charm with these two items taken care of. For your satisfaction you can run a loop of a million or so calls to the SP with and without the two optimizations, and use SQL Profiler to track recompile events.
Now, about SQL-injection. This will be an issue if you use an incorrect user widget such as a textbox to allow the user to input the item codes. In that scenario it is possible that a hacker may write select statements and try to extract information about your system. You can write code to prevent this but I think going down that route is a trap. Instead consider using an appropriate user widget such as a listbox (depending on your frontend platform) that allows multiple selection. In this case the user will just select from a list of "presented items" and your code will generate the string containing the corresponding item codes. Basically you do not pass user text to the dynamic sql sp! You can even use slicker JQuery based selection widgets but the bottom line is that the user does not get to type any unacceptable text that hits your data layer.
Next, you just need a simple stored procedure on the database that takes a param for the item codes (e.g. '''xyz''','''abc'''). Internally it should use sp_executesql with a parameterized dynamic query.
I hope this helps.
-Tabrez
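A minimal sketch of the parameterized sp_executesql pattern this answer describes — the procedure name is invented, and reusing the asker's split function inside the dynamic text is an assumption:

```sql
-- Hypothetical sketch; procedure name is invented, and the use of the
-- asker's GetTableFromValues split function is an assumption.
CREATE PROCEDURE dbo.SearchItems
    @ItemCodes NVARCHAR(MAX)  -- e.g. N'xx,yy,zz', built from a listbox, never free text
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @sql NVARCHAR(MAX) = N'
        SELECT i.*
        FROM dbo.Item AS i
        WHERE i.ItemCode IN (SELECT * FROM dbo.GetTableFromValues(@codes));';

    -- Passing @ItemCodes as a real parameter (not concatenated into @sql)
    -- lets the cached plan be reused and keeps user input out of the SQL text.
    EXEC sys.sp_executesql
        @sql,
        N'@codes NVARCHAR(MAX)',
        @codes = @ItemCodes;
END
```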

Best Way to Handle SQL Parameters?

I essentially have a database layer that is totally isolated from any business logic. This means that whenever I get ready to commit some business data to a database, I have to pass all of the business properties into the data method's parameter. For example:
Public Function Commit(foo as object) as Boolean
This works fine, but when I get into commits and updates that take dozens of parameters, it can be a lot of typing. Not to mention that two of my methods, update and create, take the same parameters since they essentially do the same thing. What I'm wondering is: what would be an optimal solution for passing these parameters, so that I don't have to change them in both methods every time something changes, and so I can reduce my typing? :) I've thought of a few possible solutions. One would be to move all the SQL parameters to the class level of the data class and then store them in some sort of array that I set in the business layer. Any help would be useful!
So essentially you want to pass in a List of Parameters?
Why not redo your Commit function and have it accept a List of Parameter objects?
If you're on SQL 2008 you can use MERGE to replace insert/update juggling. Sometimes called upsert.
You could create a struct to hold the parameter values.
Thanks for the responses, but I think I've figured out a better way for what I'm doing. It's similar to using the upsert, but what I do is have one method called Commit that looks for the given primary key. If the record is found in the database, then I execute an update command. If not, I do an insert command. Since the parameters are the same, you don't have to worry about changing them.
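The "find by key, then update or insert" Commit described above might look like this on the database side; table and column names here are invented for illustration, and on SQL 2008+ the MERGE statement mentioned earlier is an alternative:

```sql
-- Hypothetical sketch of the update-or-insert Commit; table and column
-- names are invented for illustration.
CREATE PROCEDURE dbo.CommitFoo
    @Id   INT,
    @Name NVARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;

    IF EXISTS (SELECT 1 FROM dbo.Foo WHERE Id = @Id)
        UPDATE dbo.Foo SET Name = @Name WHERE Id = @Id;
    ELSE
        INSERT INTO dbo.Foo (Id, Name) VALUES (@Id, @Name);
END
```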
For your problem I guess the Iterator design pattern is the best solution. Pass in an interface implementation, say ICommitableValues; you can pass in a key-value enumeration like this, where the keys are the column names and the values are the committable column values. A property is even dedicated to returning the table name into which to insert these values, and/or stored procedures, etc.
To save typing you can use declarative programming syntax (attributes) to declare the committable properties, and a main class in the middleware can use reflection to extract the values of these committable properties and prepare an ICommitableEnumeration implementation from it.