NHibernate - Getting a list

I am trying to fetch a list from my database fulfilling a given criterion. The statement I am using is:
var products = session
    .CreateCriteria(typeof(Product))
    .Add(Restrictions.Eq("Category", category))
    .List();
where Product is my domain object and session is the current active session.
Whenever I use this statement, NHibernate queries the database every time to fetch the list, instead of querying it just the first time and returning the result from the cache from the second time onward. Is there anything I am doing incorrectly?

It has to hit the database, but only to retrieve the PK values in the query results.
Demonstration: set a breakpoint on this line and execute it once, then pause before it executes again. Modify the database directly to change one of the returned objects' values, then run the line again and check the results. The entities returned should not reflect the changes you made to the database (i.e., they came from the session cache).
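A sketch of that demonstration in code. NHibernate's C# API closely mirrors Hibernate's Java Criteria API, so this is written as Java against Hibernate under that assumption, with a hypothetical mapped Product entity that has a category property; the same flow applies one-to-one in C#.

import java.util.List;
import org.hibernate.Session;
import org.hibernate.criterion.Restrictions;

public class SessionCacheDemo {
    static void demo(Session session, String category) {
        // First execution: hits the database and caches the returned
        // Product instances in the session (first-level) cache.
        List first = session.createCriteria(Product.class)
                .add(Restrictions.eq("category", category))
                .list();

        // ... pause here and change one of those rows directly in the DB ...

        // Second execution: the query still hits the database, but the
        // entities handed back are the cached instances from the first
        // call, so the external change is not reflected on them.
        List second = session.createCriteria(Product.class)
                .add(Restrictions.eq("category", category))
                .list();
    }
}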

Related

Access SQL - Update statement can't set field to NULL

I've inherited an application that uses Access as the database. I have a field (Date/Time, not required) with some values I need to set to null after a specific date.
To update it I have a little program that runs the query and tells me how many rows were affected. (Access isn't installed on the server, and the .mdb is constantly locked, so I can't download, update, and replace it. But I can use a simple VB program.)
Anyway I need to set some values to null, and to do that I use the following query:
UPDATE [AppPosting] SET [approvedTime] = NULL WHERE [approvedTime] >= #25/10/2022 00:00:00#
Running it gives me "82 affected rows", and running it again gives me the same number of affected rows. If I open Access and look in the (local copy of the) database, I can see the rows haven't updated. If I run the same query in Access, I also get 82 affected rows, but there the values really are set to null.
So what gives? Through OleDbConnection my update says it updates but doesn't, whereas through Access it says it updated and actually does?

Can I use Postgres transactions only for write queries and use read queries without transaction?

What happens if I use transactions for write operations but don't use them for read operations?
My use case:
1. get some data1 from the db (without a transaction)
2. create some data2 using data1 (with a transaction)
3. get some data3 from the db (without a transaction)
4. create some data4 using data2 and data3 (with a transaction)
5. if no error, commit; otherwise roll back
Is there something wrong with not using a transaction for the two read queries?
Edit/Add/Delete Records
A transaction is used when you want to ensure that a bunch of row edit/add/delete queries are committed to the db together. In other words, you want to ensure that all SQL commands in the bunch run successfully, or that none of them are committed. E.g. you are saving a new record to a users table and a users address table together, but you might not want to write to the users table if the address record fails for some reason. In this case you would use a transaction for both commands.
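A minimal JDBC sketch of that users + address example, assuming hypothetical users and user_address tables (names invented for illustration). Both inserts become visible together on commit, or not at all on rollback:

import java.sql.*;

public class SaveUserWithAddress {
    public static void save(String url, String name, String city) throws SQLException {
        try (Connection con = DriverManager.getConnection(url)) {
            con.setAutoCommit(false); // start the transaction
            try (PreparedStatement insUser = con.prepareStatement(
                     "INSERT INTO users (name) VALUES (?)");
                 PreparedStatement insAddr = con.prepareStatement(
                     "INSERT INTO user_address (user_name, city) VALUES (?, ?)")) {
                insUser.setString(1, name);
                insUser.executeUpdate();
                insAddr.setString(1, name);
                insAddr.setString(2, city);
                insAddr.executeUpdate();
                con.commit();    // both rows commit together
            } catch (SQLException e) {
                con.rollback();  // neither row is written
                throw e;
            }
        }
    }
}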
Read Records
If you understand the above, you know you don't need transactions for read SQL commands.
Whether that sequence is fine depends on your requirements. With your current procedure, the following could happen:
if you encounter an error before step 2 finishes, nothing has changed
if you encounter an error before step 5 finishes, you have only data2, but not data4
if no error happens before step 5 has completed, you have data2 and data4
If that is fine for you, there is no problem with what you are doing.
So if you're going to query the database for the same rows that you just inserted inside a transaction that hasn't been committed yet, then you should read from the database using that transaction.
E.g. you create a user, then you need to create an external account for this user, and the method that creates that external account reads the user from the database rather than receiving it as a parameter. You can either modify the create-external-account method so it takes the user as a parameter and pass it the just-created user, or you can keep the method as it is, but then you have to make sure you pass the transaction to it. Otherwise, if the transaction is not committed and is not passed to the read query, the created user won't be found.
Ideally you should avoid this by passing the input data to the create-external-account method too, so you don't need to read the user from the db; but if for some reason that is not possible, make sure you read from the db using the transaction.
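A minimal JDBC sketch of that pitfall, assuming Postgres and a hypothetical users table: the uncommitted row is visible to a read on the same connection/transaction, but a read on a separate connection would not find it yet.

import java.sql.*;

public class ReadYourOwnWrite {
    public static void createUserAndAccount(String url) throws SQLException {
        try (Connection tx = DriverManager.getConnection(url)) {
            tx.setAutoCommit(false); // the transaction lives on this connection
            try {
                try (PreparedStatement ins = tx.prepareStatement(
                        "INSERT INTO users (name) VALUES (?)")) {
                    ins.setString(1, "alice");
                    ins.executeUpdate();
                }
                // Read back on the SAME connection: the uncommitted row is found.
                try (PreparedStatement sel = tx.prepareStatement(
                        "SELECT name FROM users WHERE name = ?")) {
                    sel.setString(1, "alice");
                    try (ResultSet rs = sel.executeQuery()) {
                        System.out.println(rs.next()); // true
                    }
                }
                // The same SELECT on a DIFFERENT connection here would print
                // false (READ COMMITTED): the "user not found" trap above.
                tx.commit();
            } catch (SQLException e) {
                tx.rollback();
                throw e;
            }
        }
    }
}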

IBatis2 dynamic update queries execution

I am working on executing a set of dynamically generated update queries against SQL Server using iBatis2. I have written an update element in the sqlMap as below, which executes within the scope of a transaction:
<update id="updateDepartments" parameterClass="Office">
declare #sql nvarchar(400);
<iterate property="departmentList">
<!-- form the update query and store in #sql-->
exec sp_executesql #sql
</iterate>
</update>
I have a couple of questions about how the above queries execute.
Do they execute as a batch or individually, i.e., is the number of network calls to the database server equal to the number of update queries generated?
How can the client code know how many rows actually got updated when the queries execute? The return value always shows 1, even though multiple rows got updated.
Is there a better way to do this using iBatis2?
Examples of the dynamically formed update queries:
update Department set cost1=1000 where department_name='sales'
update Department set cost2=2000 where department_name='finance'
update Department set cost3=3000 where department_name='marketing'
The parameters passed via the parameterClass are a list of objects containing:
1. Department name
2. Column name to be updated
3. Value to be updated for column in 2.
For example:
['sales', 'cost1', 1000]
['finance', 'cost2', 2000]
It may be possible to perform this as a batch execution but I'm not positive it can be. I haven't used iBatis 2 in a long time now.
What I'm sure will work is executing each SQL statement separately. There's pretty much no overhead in calling it multiple times, unless you are performing thousands of updates at once. Are you?
I think you could call it each time using a parameter class like:
class UpdateDptParams {
    String name;
    String column;
    String value;
    // setters & getters omitted for brevity
}
Then, the mapper could look like:
<update id="updateDepartment" parameterClass="updateDptParams">
update Department set ${column}=${value} where department_name=#{name}
</update>
Note that column and value are injected as raw strings (using iBatis 2's $...$ substitution), since their types vary. However, name is a standard iBatis prepared-statement parameter (using #...#), since it's always a VARCHAR. Make sure injected values come from a known source, and not from the user interface or another external source; otherwise your code will be vulnerable to SQL injection.
Finally, if you are updating thousands of rows, this solution can still be good. It could be improved by batching the updates, or by performing several updates at once using more complex SQL statements. I'm not sure how easy or error-prone that optimization would be, though.
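If batching does turn out to be needed, here is a hedged sketch of the Java side, using the iBatis 2 SqlMapClient batch API (startBatch / executeBatch; whether the round-trips are truly batched depends on the JDBC driver) together with the UpdateDptParams class above:

import com.ibatis.sqlmap.client.SqlMapClient;
import java.sql.SQLException;
import java.util.List;

public class DepartmentUpdater {
    // Queues one UPDATE per parameter object, flushes them as a batch,
    // and commits everything in a single transaction.
    public static int updateDepartments(SqlMapClient sqlMap,
                                        List<UpdateDptParams> params) throws SQLException {
        try {
            sqlMap.startTransaction();
            sqlMap.startBatch();
            for (UpdateDptParams p : params) {
                sqlMap.update("updateDepartment", p);
            }
            int rows = sqlMap.executeBatch(); // flush; returns affected-row count
            sqlMap.commitTransaction();
            return rows;
        } finally {
            sqlMap.endTransaction(); // rolls back if commit was not reached
        }
    }
}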

How does updating or inserting while looping through a result set affect the result set itself?

Suppose I fetch a result set (RS) based on certain conditions and start looping through it; then, in certain situations, I update, insert, or delete records, which may have been part of this RS, using separate prepared statements.
How does this affect the result set? My inclination is to think that since the Statement which fetched this RS was executed earlier in the process, this RS will now be blind to the changes made by my prepared statements.
Pseudocode :
Preapare Statement ps1
execute ps1 -> get Result Set rs1
loop through rs1
{
Update or delete records using other prepared statements
}
Read Consistency
Oracle guarantees that the set of data seen by a statement is consistent with respect to a single point in time and does not change during statement execution (statement-level read consistency).
That is why, if you have a query such as
insert into t
select * from t;
Oracle will simply duplicate all rows without going into an infinite loop or raising an error.
There are other implications because of this.
1) Oracle reads from the rollback segment to provide you with this read-consistent image of your data. So, if your rollback segments are not correctly sized, or you commit across fetches, you'll get the "Snapshot too old" error, since your rollback data is no longer available.
Ok, so if that is the case, is it possible to refresh it while making updates? I mean, aside from making the cursor updatable and using the built-in functions of the result set.
2) Each query sees the data as of the point in time it began. If by refresh you mean re-firing the query, then the data you see might again be different, if you commit within your PL/SQL body or within a PL/SQL loop, or if other transactions are running concurrently in your system.
It doesn't. The result set of a query/cursor is kept by the database, even if you alter or remove the rows that are the basis of this result set. So you are correct: it is blind to changes made after the statement is executed.
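A minimal JDBC sketch of the scenario against Oracle, with a hypothetical employees table and placeholder connection details: rows are deleted while the cursor is open, yet the already-executing query keeps returning its original snapshot.

import java.sql.*;

public class SnapshotDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "scott", "tiger");
             PreparedStatement ps1 = con.prepareStatement(
                 "SELECT emp_id FROM employees");
             PreparedStatement del = con.prepareStatement(
                 "DELETE FROM employees WHERE emp_id = ?")) {

            ResultSet rs1 = ps1.executeQuery();
            while (rs1.next()) {
                long id = rs1.getLong("emp_id");
                // Delete the row we are currently looking at ...
                del.setLong(1, id);
                del.executeUpdate();
                // ... yet rs1 keeps iterating over the full original
                // snapshot: the deletes are invisible to the open cursor.
            }
        }
    }
}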

Delete operation takes some time to complete

I have a method that deletes records in a DB; the query is created correctly and the records are deleted, but only after 40 seconds to a minute.
If I execute the query at the DB prompt, the record is deleted immediately.
The code I have only does the following (sketched in code below):
getting the database connection
preparing the statement, passing 3 variables to the "delete from" statement
calling executeUpdate on the statement
calling commit on the connection
closing the DB connection
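The poster's code isn't shown, so here is a sketch of those five steps with placeholder connection details and a hypothetical orders table and columns:

import java.sql.*;

public class DeleteRecords {
    public static void delete(String a, String b, String c) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:<your-db-url>", "user", "pass");    // placeholder URL
             PreparedStatement ps = con.prepareStatement(
                 "DELETE FROM orders WHERE col1 = ? AND col2 = ? AND col3 = ?")) {
            con.setAutoCommit(false);
            ps.setString(1, a);
            ps.setString(2, b);
            ps.setString(3, c);
            ps.executeUpdate();  // the slow step described in the question
            con.commit();
        }                        // connection closed by try-with-resources
    }
}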
What could be wrong? Any clue?
You are implicitly assuming that a DELETE statement is trivial in all cases, which is not always true. At the very least, it needs to find the records it wants to remove in the table. This may require a full table scan if, for example, the WHERE predicate cannot leverage an existing index.
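A hedged follow-up to that point: if the WHERE predicate has no supporting index, adding one lets the DELETE locate its rows without scanning the whole table. This reuses the hypothetical orders table and columns from the sketch above:

import java.sql.*;

public class AddSupportingIndex {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:<your-db-url>", "user", "pass");    // placeholder URL
             Statement stmt = con.createStatement()) {
            // Index the columns used by the DELETE's WHERE clause.
            stmt.execute(
                "CREATE INDEX ix_orders_cols ON orders (col1, col2, col3)");
        }
    }
}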