NHibernate handling multiple result sets from a stored procedure call

I'm using a stored procedure to handle search on my site; it includes full-text searching, relevance and paging. I also wanted it to return the total number of results that would have been returned had paging not been applied, so my SP now returns two select statements: the search results, and just SELECT @totalResults.
Is there any way I can get NHibernate to handle this? I'm currently accessing the ISession's connection, creating a command, executing the SP myself and mapping the results. This isn't ideal, so I'm hoping I can get NH to handle it for me.
Or if anyone has any better ways of creating complicated searches etc. with NH, I'd really like to hear them.

No, NHibernate only uses the first result set returned by the stored procedure and ignores any others.
You will need to use an alternative method, like ADO.NET.
Or, you can incur processing overhead by having two stored procedures. One for each result set. Gross.
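If you do go the two-procedure route, a rough T-SQL sketch might look like the following (table, column and parameter names are invented, and the paging uses SQL Server 2012+ OFFSET/FETCH syntax); each procedure returns exactly one result set, so NHibernate can map each call normally:
CREATE PROCEDURE dbo.SearchArticles_Page
    @searchText nvarchar(200),
    @pageIndex  int,
    @pageSize   int
AS
BEGIN
    SET NOCOUNT ON;
    -- the only result set: one page of matches, ranked by full-text relevance
    SELECT a.Id, a.Title, k.[RANK]
    FROM dbo.Articles a
    JOIN FREETEXTTABLE(dbo.Articles, (Title, Body), @searchText) k ON k.[KEY] = a.Id
    ORDER BY k.[RANK] DESC
    OFFSET @pageIndex * @pageSize ROWS FETCH NEXT @pageSize ROWS ONLY;
END
GO

CREATE PROCEDURE dbo.SearchArticles_Count
    @searchText nvarchar(200)
AS
BEGIN
    SET NOCOUNT ON;
    -- the only result set: the total, playing the role of the original SELECT @totalResults
    SELECT COUNT(*) AS TotalResults
    FROM dbo.Articles a
    JOIN FREETEXTTABLE(dbo.Articles, (Title, Body), @searchText) k ON k.[KEY] = a.Id;
END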

Related

How to handle a big result set without loading it into memory

Using NHibernate and getNamedQuery, how can I handle a huge result set without loading all the rows into a list?
Let's suppose I need to do some logic with each row, but I don't need it after using it.
I don't want to use pagination, because I can't change the source (a stored procedure).
When you say you don't want to use pagination, do you consider using setMaxResults() and setFirstResult() pagination?
Query query = session.getNamedQuery("storedProc");
query.setFirstResult(fromIndex);
query.setMaxResults(pageSize);
I'm not really sure you have any other option with Hibernate. There are all sorts of ways you can incorporate partitioning to split the work, but that's only after you've loaded the data that you want to partition.
Assuming you're doing some sort of logic against a row and storing in the DB, and seeing as you're already using stored procs, your best bet may be another stored proc. Alternatively, if you want to keep your logic outside of your DB, then you should be working off of tables rather than data from a stored proc.

Keeping dynamic out of SQL while using specifications with stored procedures

A specification is essentially a text string representing a "where" clause created by an end user.
I have stored procedures that copy a set of related tables and records to other places. The operation is always the same, but dependent on some crazy user requirements like "products that are frozen and blue and on sale on Tuesday".
What if we fed the user specification (or string parameter) to a scalar function that returned true/false, and which executed the specification as dynamic SQL or just EXEC (@variable)?
It could tell us whether those records exist, and we could add the result of the function to the WHERE clause of our copy-products query.
It would keep us from recompiling the copy script each time our WHERE clauses changed. Plus it would isolate the product selection into a single function.
Anyone ever do anything like this or have examples? What bad things could come of it?
EDIT:
This is the specification I simply added to the end of each insert/select statement:
and exists (
    select null as nothing
    from SameTableAsOutsideTable inside
    where inside.ID = outside.id      -- join operations to outside table
      and inside.page in (6, 7)       -- criteria 1
      and inside.dept in (7, 6, 2, 4) -- criteria 2
)
It would be great to feed a parameter into a function that produces records based on the user criteria, so all of the above could be something like:
and dbo.UserCriteria(@page = '6,7', @dept = '7,6,2,4')
Dynamic Search Conditions in T-SQL
When optimizing SQL, the important thing is optimizing the access path to the data (i.e. index usage). This trumps code reuse, maintainability, nice formatting and just about every other development perk you can think of, because a bad access path will cause the query to perform hundreds of times slower than it should. The article linked above sums up all the options you have very well, and your envisioned function is nowhere on the radar. Your options will gravitate toward dynamic SQL or very complicated static queries; I'm afraid there is no free lunch on this topic.
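For a flavour of the "very complicated static query" option, the pattern usually looks something like the sketch below (the table and the single-valued parameters are hypothetical stand-ins for the product selection); each optional criterion collapses to true when its parameter is NULL, and OPTION (RECOMPILE) asks for a fresh plan for the values actually supplied:
DECLARE @page int = 6, @dept int = 7;   -- would normally be procedure parameters

SELECT p.ID, p.page, p.dept
FROM dbo.Products p                      -- hypothetical table
WHERE (@page IS NULL OR p.page = @page)
  AND (@dept IS NULL OR p.dept = @dept)
OPTION (RECOMPILE);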
It doesn't sound like a very good idea to me. Even supposing that you had proper defensive coding to avoid SQL injection attacks it's not going to really buy you anything. The code still needs to be "compiled" each time.
Also, it's pretty much always a bad idea to let users create free-form WHERE clauses. Users are pretty good at finding new and innovative ways to bring a server to a grinding halt.
If you or your users or someone else in the business can't come up with some concrete search requirements then it's likely that someone isn't thinking about it hard enough and doesn't really know what they want. You can have pretty versatile search capabilities without letting the users completely loose on the system. Alternatively, look at some of the BI tools out there and consider creating a data mart where they can do these kinds of ad hoc searches.
How about this:
You create another stored procedure (instead of a function) and pass the right condition to it.
Based on that condition, it dumps the matching record IDs into a temp table.
Next, your copy procedure reads the IDs from that table and does the rest of the work.
Or you could create a user-defined function that returns a table containing nothing but the IDs of the records that match your (dynamic) criteria; see the sketch after this answer.
If I am totally off, please let me know.
Hope this helps.
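A minimal sketch of the temp-table idea, with invented object names (the caller creates the temp table, a filter procedure fills it from the user specification via dynamic SQL, and the copy statement joins against it); the SQL injection caveats raised elsewhere in this thread still apply to @criteria:
CREATE PROCEDURE dbo.FilterProducts
    @criteria nvarchar(max)   -- the user specification, e.g. N'page IN (6, 7) AND dept IN (7, 6, 2, 4)'
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @sql nvarchar(max) =
        N'INSERT INTO #MatchingIds (ID) SELECT ID FROM dbo.Products WHERE ' + @criteria;
    EXEC (@sql);              -- #MatchingIds must already exist in the calling session
END
GO

-- caller
CREATE TABLE #MatchingIds (ID int PRIMARY KEY);
EXEC dbo.FilterProducts @criteria = N'page IN (6, 7) AND dept IN (7, 6, 2, 4)';

INSERT INTO dbo.ProductsCopy (ID, page, dept)
SELECT p.ID, p.page, p.dept
FROM dbo.Products p
JOIN #MatchingIds m ON m.ID = p.ID;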
If you are forced to use dynamic queries and you don't have any solid, predefined search requirements, it is strongly recommended to use sp_executesql instead of EXEC. It provides parameterized queries to prevent SQL injection attacks (to some extent), and it lets SQL Server reuse cached execution plans, which helps performance. (More info)
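As a small example of the difference (the table and parameters are made up), the values are passed to sp_executesql as real parameters instead of being concatenated into the SQL string:
DECLARE @sql nvarchar(max);
DECLARE @page int = 6, @dept int = 7;   -- in practice these come from the caller

SET @sql = N'SELECT ID, page, dept
             FROM dbo.Products
             WHERE page = @page AND dept = @dept';

EXEC sp_executesql
    @sql,
    N'@page int, @dept int',            -- parameter definitions for the statement
    @page = @page,
    @dept = @dept;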

Avoiding duplicating SQL code?

I know many ways to avoid duplicating application code (in my case PHP). However, I am developing a rather big application that does some calculations in the database with the data it finds, and I have noticed the need to use the same pieces of SQL in several places.
I don't like the idea of copying and pasting the same thing over and over again. What is a good way to handle this? Should I use stored procedures? I could almost calculate some of the stuff in PHP, except that most of the time the queries calculate values based on data that isn't returned by the query, and it seems wasteful to return extra data to PHP just so it can do its calculations. Sometimes that may be okay, but right now it does not feel like it.
What should I do?
For example, in many of my SQL queries I am calculating something like this:
...
(SELECT SUM(amount) FROM InvoiceTransaction IT INNER JOIN Invoice I ON IT.invoiceId = I.id) AS total
...
FROM InvoiceTransaction IT
...
Note that I'm at home now so I'm writing this off the top of my head.
I think you have 2 solutions:
if the SQL returns a small amount of data, I would simply wrap the SQL invocation in a method call and call it (parameterising as necessary)
if the SQL handles a lot of data, I would keep that data in the database and use a stored procedure. You can then call that stored procedure without duplicating the code (but wrap the stored proc call in a function and call it - i.e. as in option 1)
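A brief sketch of option 2, assuming (from the example above) that amount lives on InvoiceTransaction; the shared calculation sits in one procedure that the PHP code can call wherever it needs the figure:
CREATE PROCEDURE dbo.GetInvoiceTotal
    @invoiceId int
AS
BEGIN
    SET NOCOUNT ON;
    -- the one place the "invoice total" rule is defined
    SELECT SUM(IT.amount) AS total
    FROM InvoiceTransaction IT
    WHERE IT.invoiceId = @invoiceId;
END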
I wouldn't necessarily shy away from stored procedures. But I would advise keeping business logic out of them (keep it in the application itself) and make sure you have sufficient unit testing around it.
I don't prefer stored procedures, especially not for the sake of refactoring. You should consider writing a function that returns the records you need, and put your SQL queries in that function so you can call it instead of scattering your SQL everywhere.
I think we would need an example of a query. Stored procs might be a good option, or an alternative might be to use views. One advantage of having your queries in views or stored procs is that you can often use the database to see where your tables are used. The disadvantage is that you are locking yourself into one database; however, you are probably doing that anyway.
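To illustrate the view suggestion with the invoice snippet above (again assuming amount is a column of InvoiceTransaction), the repeated total calculation could live in one view and simply be joined wherever it is needed:
CREATE VIEW dbo.InvoiceTotals AS
SELECT IT.invoiceId, SUM(IT.amount) AS total
FROM InvoiceTransaction IT
GROUP BY IT.invoiceId;
GO

-- any query that needs the total just joins the view
SELECT I.id, T.total
FROM Invoice I
JOIN dbo.InvoiceTotals T ON T.invoiceId = I.id;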

T-SQL optimizing performance of various stored procedures question

So, I have written several stored procedures that act on individual rows of data by taking in an ID number. I would like to keep several stored procedures that can call this stored procedure at different levels of my database schema. For instance, when a row is inserted I call this stored procedure; when something else is modified I would like to call this stored procedure for each affected row. The idea is to have one set of base code that can be called everywhere else but that acts on different amounts of data. I have been able to produce this result with cursors, but I am told these are very inefficient. Is there any other way to produce this kind of functionality without sacrificing performance? Thanks.
Yes. Use standard joins to operate on sets rather than RBAR (Row By Agonising Row). That is, rather than calling a function for each row, design a join that performs the required operation on every applicable row as a single set operation (see the sketch at the end of this answer).
I often see developers use the 'function operates on each row' approach, and although this seems the obvious way to encapsulate logic, it doesn't perform well on SQL Server or most other DB engines.
In some circumstances, a table-valued function can be used effectively (MS SQL Server).
(BTW, you are correct in saying cursors are inefficient).
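A small sketch of the difference, with made-up tables and logic (suppose the per-row procedure flagged an order as overdue):
-- RBAR: conceptually, call the procedure once per ID
-- EXEC dbo.FlagOverdueOrder @orderId = 42;

-- set-based: one UPDATE with a join handles every applicable row at once
UPDATE o
SET o.IsOverdue = 1
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerId = o.CustomerId
WHERE c.IsActive = 1
  AND o.DueDate < GETDATE()
  AND o.IsOverdue = 0;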

SQL Server - standard pattern for doing row by row operations on a table/view

I want to iterate through a table/view and then kick off some process (e.g. run a job, send an email) based on some criteria.
My arbitrary constraint here is that I want to do this inside the database itself, using T-SQL on a stored proc, trigger, etc.
Does this scenario require cursors, or is there some other native T-SQL row-based operation I can take advantage of?
Your best bet is a cursor. SQL being declarative and set-based, any workaround you may find that tries to force SQL into imperative, row-oriented operations is unreliable and may break; e.g. the optimizer may cut your 'operation' out of the execution, or perform it in a strange order or an unexpected number of times.
The bad name cursors generally get comes from their being deployed instead of set-based operations (like doing a computation and an update, or returning a report) because the developer could not find a set-oriented way of achieving the same functionality. But for non-SQL operations (i.e. launching a process) they are appropriate; a bare-bones sketch follows this answer.
You may also use some variations on the cursor theme, like iterating through a result set on the client side. That is very similar in spirit to a cursor, although it doesn't use explicit cursors.
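That sketch might look like this (the table, columns and the called procedure are placeholders for whatever per-row process you need to kick off):
DECLARE @id int, @email nvarchar(256);

DECLARE work_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT Id, Email
    FROM dbo.PendingNotifications
    WHERE Status = 'New';

OPEN work_cursor;
FETCH NEXT FROM work_cursor INTO @id, @email;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- the non-set-based part: kick off some process for this row
    EXEC dbo.SendNotificationEmail @recipient = @email, @sourceId = @id;

    FETCH NEXT FROM work_cursor INTO @id, @email;
END

CLOSE work_cursor;
DEALLOCATE work_cursor;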
The standard way to do this would be SSIS. Just use an Execute SQL task to get the rows and a Foreach Loop container to iterate once per row. Inside the container, run whatever tasks you like, and they'll have access to the columns of each row.
If you are planning on sending an email to each record with an email address (or a similar row-based operation) then you would indeed plan on using a cursor.
There is no other "row-based" operation that you'd do within SQL itself (although I like John's suggestion to investigate SSIS, as long as you have SQL Server Standard or Enterprise). However, if you are summing, searching or performing any other kind of operation and then kicking off an event once the entire selection set has been processed, then you would certainly not use a cursor. Just so you know: cursors are generally considered a "last resort" approach to problems in SQL Server.
The first thought that comes to mind when I need to iterate over the result set of a query is to use cursors. Yes, it is a quick and dirty way of programming, but cursors have their drawbacks as well: they incur overhead and can become performance bottlenecks.
There are alternatives to using cursors. You can try using a temp table with an identity column: copy your table into the temp table and use a WHILE loop to iterate over the rows, then call your stored procedure based on a condition.
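A sketch of that pattern, reusing the placeholder names from the cursor example above; the identity column gives each row a sequence number to step through:
CREATE TABLE #Work
(
    RowNum int IDENTITY(1, 1) PRIMARY KEY,
    Id     int,
    Email  nvarchar(256)
);

INSERT INTO #Work (Id, Email)
SELECT Id, Email FROM dbo.PendingNotifications WHERE Status = 'New';

DECLARE @i int = 1;
DECLARE @max int = (SELECT MAX(RowNum) FROM #Work);
DECLARE @id int, @email nvarchar(256);

WHILE @i <= @max
BEGIN
    SELECT @id = Id, @email = Email FROM #Work WHERE RowNum = @i;

    -- the per-row condition and logic go here; the called procedure is a placeholder
    EXEC dbo.SendNotificationEmail @recipient = @email, @sourceId = @id;

    SET @i = @i + 1;
END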
Here, check this link for alternatives to cursors - http://searchsqlserver.techtarget.com/tip/0,289483,sid87_gci1339242,00.html
cheers