We're using NHibernate in a project that gets data out of the database and writes reports to a separate system. In my scenario, a patient will usually, but not always, have a next appointment scheduled when the report gets written. The query below gets the next appointment data, to include in the report.
private NextFollowup GetNextFollowup(int EncounterID)
{
    try
    {
        NextFollowup myNextF = new NextFollowup();
        IQuery myNextQ = this.Session.GetNamedQuery("GetNextFollowup").SetInt32("EncounterID", EncounterID);
        myNextF = myNextQ.UniqueResult<NextFollowup>();
        return myNextF;
    }
    catch (Exception e)
    {
        throw e;
    }
}
Here's the question:
Usually this works fine, as there is a single result when an appointment is scheduled. However, in the cases where there is no next followup, I get the error that there is no unique result. I don't really want to throw an exception in this case, I want to return the empty object. If I were to get a list instead of a UniqueResult, I'd get an empty list in the situations where there is no next followup. Is there a better way to handle the situation of "when there is a value, there will be only one" than using a list in the HQL query?
This may work:
private NextFollowup GetNextFollowup(int encounterID)
{
    IQuery query = this.Session.GetNamedQuery("GetNextFollowup").SetInt32("EncounterID", encounterID);

    // nextFollowup will be either the next instance, or null if none exist in the db.
    var nextFollowup = query.Enumerable<NextFollowup>().SingleOrDefault();
    return nextFollowup;
}
Note: updated naming to follow Microsoft best practices
The try/catch is not serving any purpose here except to lose the stack trace if there is an exception, so I've removed it.
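(As an aside, a minimal illustration of the difference, not part of the method above: rethrowing with throw e; resets the stack trace, whereas a bare throw; preserves it.)
try
{
    return myNextQ.UniqueResult<NextFollowup>();
}
catch (Exception)
{
    // Log here if needed, then rethrow with a bare "throw;"
    // so the original stack trace is preserved.
    throw;
}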
If you want to return a new NextFollowup if none exist, you can update the query line to:
var nextFollowup = query.Enumerable<NextFollowup>().SingleOrDefault() ?? new NextFollowup();
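If the named query could ever match more than one row, another option (a sketch, assuming the query can sensibly be capped at one row, for example by ordering on appointment date) is to limit the result set and keep UniqueResult, which returns null when nothing matches:
IQuery query = this.Session.GetNamedQuery("GetNextFollowup")
    .SetInt32("EncounterID", encounterID)
    .SetMaxResults(1);
NextFollowup nextFollowup = query.UniqueResult<NextFollowup>() ?? new NextFollowup();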
I use the following Hibernate query a lot when retrieving multiple records by their primary key:
Criteria c = session
.createCriteria(Song.class)
.setLockMode(LockMode.NONE)
.setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY)
.add(Restrictions.in("recNo", ids));
List<Song> songs = c.list();
The problem is that the number of ids can vary from 1 to 50, and every different number of ids requires a different PreparedStatement. That, combined with the fact that any particular prepared statement is tied to a particular database pool connection, means that the opportunity to reuse a PreparedStatement is quite low.
Is there a way I can rewrite this so that the same statement can be used with a different number of IN values? I think I read somewhere it could be done by using ANY instead, but I cannot find the reference.
This is called "in clause parameter padding" and can be activated with a hibernate property:
<property
name="hibernate.query.in_clause_parameter_padding"
value="true"
</property>
Read more about this topic here: https://vladmihalcea.com/improve-statement-caching-efficiency-in-clause-parameter-padding/
With some help I ended up getting a plain SQL connection from Hibernate and then using standard SQL with ANY instead of IN. As far as I know, using ANY means we only need a single prepared statement per connection, so it is better than using padded INs. The drawback is that, because this is plain SQL, it is not much use if you need to modify the data returned.
public static List<SongDiff> getReadOnlySongDiffs(List<Integer> ids)
{
    Connection connection = null;
    try
    {
        connection = HibernateUtil.getSqlSession();
        String SONGDIFFSELECT = "select * from SongDiff where recNo = ANY(?)";
        PreparedStatement ps = connection.prepareStatement(SONGDIFFSELECT);
        ps.setObject(1, ids.toArray(new Integer[ids.size()]));
        ResultSet rs = ps.executeQuery();
        List<SongDiff> songDiffs = new ArrayList<>(ids.size());
        while (rs.next())
        {
            SongDiff sd = new SongDiff();
            sd.setRecNo(rs.getInt("recNo"));
            sd.setDiff(rs.getBytes("diff"));
            songDiffs.add(sd);
        }
        return songDiffs;
    }
    catch (Exception e)
    {
        MainWindow.logger.log(Level.SEVERE, "Failed to get SongDiffsFromDb:" + e.getMessage(), e);
        throw new RuntimeException(e);
    }
    finally
    {
        SessionUtil.close(connection);
    }
}
public static Connection getSqlSession() throws SQLException {
    if (factory == null || factory.isClosed()) {
        createFactory();
    }
    return ((C3P0ConnectionProvider) factory.getSessionFactoryOptions().getServiceRegistry().getService(C3P0ConnectionProvider.class)).getConnection();
}
If you're still on an old version of Hibernate as suggested in the comments to Simon's answer here, as a workaround, you could use jOOQ's ParsingConnection to transform your SQL by applying the IN list padding feature transparently behind the scenes. You can just wrap your DataSource like this:
// Input DataSource ds1:
DSLContext ctx = DSL.using(ds1, dialect);
ctx.settings().setInListPadding(true);
// Use this DataSource for your code, instead:
DataSource ds2 = ctx.parsingDataSource();
I've written up a blog post to explain this more in detail here.
(Disclaimer: I work for the company behind jOOQ)
I have a field in my database with duplicates. I want to use it in a dropdown list, which has to return distinct data.
Here is the method that I created to do this:
public IEnumerable<SelectListItem> GetBranches(string username)
{
    using (var objData = new BranchEntities())
    {
        IEnumerable<SelectListItem> objdataresult = objData.ABC_USER.Select(c => new SelectListItem
        {
            Value = c.BRANCH_CODE.ToString(),
            Text = c.BRANCH_CODE
        }).Distinct(new Reuseablecomp.SelectListItemComparer());
        return objdataresult;
    }
}
Here is the class I am using:
public static class Reuseablecomp
{
    public class SelectListItemComparer : IEqualityComparer<SelectListItem>
    {
        public bool Equals(SelectListItem x, SelectListItem y)
        {
            return x.Text == y.Text && x.Value == y.Value;
        }

        public int GetHashCode(SelectListItem item)
        {
            int hashText = item.Text == null ? 0 : item.Text.GetHashCode();
            int hashValue = item.Value == null ? 0 : item.Value.GetHashCode();
            return hashText ^ hashValue;
        }
    }
}
Nothing is returned and I get the error below. When I try a basic query without Distinct, everything works fine.
{"The operation cannot be completed because the DbContext has been disposed."}
System.Exception {System.InvalidOperationException}
Inner exception = null
How can I return distinct data for my dropdown?
Technically, your problem can be solved simply by appending .ToList() after your Distinct(...) call. The underlying issue is that LINQ queries use deferred execution: until the actual data the query represents is needed, the query is not sent to the database. Calling ToList is one such operation that requires the actual data, and therefore causes the query to be evaluated immediately.
However, the root cause of your problem is that you are doing this within a using statement. When the method exits, the query has not yet been evaluated, but you have already disposed of your context. Therefore, when it comes time to actually evaluate that query, there is no context to do it with and you get that exception. You should really never use a database context in conjunction with using; it's just a recipe for disaster. Your context should ideally be request-scoped, and you should use dependency injection to feed it to whatever objects or methods need it.
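To make that concrete, here is a minimal sketch of the question's method with the query materialized before the context is disposed (same names as in the question):
public IEnumerable<SelectListItem> GetBranches(string username)
{
    using (var objData = new BranchEntities())
    {
        // ToList() runs the query while the context is still alive,
        // so the caller receives plain in-memory objects.
        return objData.ABC_USER
            .Select(c => new SelectListItem
            {
                Value = c.BRANCH_CODE.ToString(),
                Text = c.BRANCH_CODE
            })
            .Distinct(new Reuseablecomp.SelectListItemComparer())
            .ToList();
    }
}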
Also, for what it's worth, if you apply Distinct to the projected BRANCH_CODE values inside the query, before building the SelectListItems, you won't need a custom IEqualityComparer any more. For example:
var objdataresult = objData.ABC_USER
    .Select(c => c.BRANCH_CODE)
    .Distinct()        // runs as SELECT DISTINCT in the database
    .AsEnumerable()
    .Select(code => new SelectListItem { Value = code, Text = code })
    .ToList();
Order of operations matters here. Calling Distinct inside the query includes it as part of the SQL sent to the database, but calling it after the projection to SelectListItem, as you're doing, runs it on the in-memory collection once it is evaluated. The latter then requires custom logic to determine what constitutes distinct items in an IEnumerable<SelectListItem>, which is not necessary for the database query version.
I am running into an issue with an SQL transform query. I have a replicated cache set up with thousands of cached items in various classes. When I run a transform query that returns specific (summary) items from classes in the cache, the query appears to execute just fine and returns a Collection. However, when I iterate through the Collection, after 2,048 items the individual items (which were castable up to that point) are simply 'GridCacheQueryResponseEntry' instances, which I can't seem to cast anymore...
Is 2,048 items the limit for a Transform Query Result Set in GridGain?
Here's the code I use to query/transform the cache items (simplified a bit). It works for exactly 2,048 items and then throws an exception:
GridCacheQuery<Map.Entry<UUID, Object>> TypeQuery = queries.createSqlQuery(Object.class, "from Object where Type = ? and Ident regexp ?");

GridClosure<Map.Entry<UUID, Object>, ReturnObject> Trans = new GridClosure<Map.Entry<UUID, Object>, ReturnObject>() {
    @Override public ReturnObject apply(Map.Entry<UUID, Object> e) {
        ReturnObject tmp = null;
        try {
            tmp = e.getValue().getReturnObject();
        } catch (Exception ex) {
            ex.getMessage();
        }
        return tmp;
    }
};

Collection<ReturnObject> results = TypeQuery.execute(Trans, "VarA", "VarB").get();

Iterator iter = results.iterator();
while (iter.hasNext()) {
    try {
        Object item = iter.next();
        ReturnObject point = (ReturnObject) item;
    } catch (Exception ex) {}
}
There are no such limitations in GridGain. Whenever you execute a query, you have two options:
Call the GridCacheQueryFuture.get() method to get the whole result set as a collection. This works only for relatively small result sets, because all the rows in the result set have to be loaded into the client node's memory.
Use GridCacheQueryFuture.next() to iterate through the result set. In this case results are fetched from remote nodes page by page. When you finish iterating through a page, it is discarded and the next one is loaded, so you only hold one page at a time, which lets you iterate over result sets of any size.
As for GridCacheQueryResponseEntry, you should not cast to it, because it's an internal GridGain class; it is simply an implementation of the Map.Entry interface representing a key-value pair from the GridGain cache.
In the case of a transform query you should get Map.Entry instances only in the transformer, while the client node receives the already-transformed values, so I'm not sure how it's possible to get them during iteration. Can you provide a small code example of how you execute the query?
I am using Criteria to query the database based on a unique key, but I am running into a weird scenario: after two or three queries, it starts giving me a timeout expired error.
using (NHibernate.ISession session = m_SessionFactory.OpenSession())
{
    using (ITransaction transaction = session.BeginTransaction())
    {
        if (cashActivity.ActivityState == ApplicationConstants.TaxLotState.Deleted || cashActivity.ActivityState == ApplicationConstants.TaxLotState.Updated)
        {
            IList<CashActivity> lsCActivity = RetrieveEquals<CashActivity>("UniqueKey", cashActivity.UniqueKey);
            if (lsCActivity != null && lsCActivity.Count > 0)
                cashActivity.CashActivityID = lsCActivity[0].CashActivityID;
        }

        if (cashActivity.ActivityState == ApplicationConstants.TaxLotState.Deleted)
        {
            session.Delete(cashActivity);
        }
        else
            session.SaveOrUpdate(cashActivity);
    }
}

public IList<T> RetrieveEquals<T>(string propertyName, object propertyValue)
{
    using (ISession session = m_SessionFactory.OpenSession())
    {
        ICriteria criteria = session.CreateCriteria(typeof(T));
        criteria.Add(Restrictions.Eq(propertyName, propertyValue));
        IList<T> matchingObjects = criteria.List<T>();
        return matchingObjects;
    }
}
I made changes in the code and started using a stateless session, but that change only reduced the frequency of the timeout error.
After debugging, I found that IList<T> matchingObjects = criteria.List<T>(); is the cause of the exception. But it only returns a single value, so it should not cause a timeout error, since the table doesn't contain more than 100 rows as of now. Any suggestions?
Unless you have wrapped NHibernate's ISessionFactory in something else, each call to OpenSession() will yield a new session. So the above code involves multiple sessions and it isn't clear if this is required.
Theoretically, a query on the session in RetrieveEquals() could block because of locks taken on the connection used in the calling method. But given the code as shown I can't see anything to prove this.
The calling method first updates a property of cashActivity, then in some cases goes on to delete the object. And there is no Commit(). This seems strange: is this really the code in use, or might there be a copy/paste error?
You also say "after two or three queries"... do you imply that there is a loop somewhere which isn't shown?
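If the blocking really is caused by the lookup running on a second connection while the first one holds locks, a rough sketch of an alternative (not the poster's code; it assumes the same mappings and constants shown above) is to do the lookup and the write on the same session and commit the transaction:
using (ISession session = m_SessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    // Look up the existing row on the same session/connection as the write.
    CashActivity existing = session.CreateCriteria(typeof(CashActivity))
        .Add(Restrictions.Eq("UniqueKey", cashActivity.UniqueKey))
        .SetMaxResults(1)
        .UniqueResult<CashActivity>();

    if (existing != null)
        cashActivity.CashActivityID = existing.CashActivityID;

    if (cashActivity.ActivityState == ApplicationConstants.TaxLotState.Deleted)
        session.Delete(cashActivity);
    else
        session.SaveOrUpdate(cashActivity);

    tx.Commit();
}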
In the DBML designer I've set Update Check to Never on all properties, but I still get an exception when doing Attach: "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported." This approach seems to have worked for others on here, but there must be something I've missed.
using (TheDataContext dc = new TheDataContext())
{
    test = dc.Members.FirstOrDefault(m => m.fltId == 1);
}

test.Name = "test2";

using (TheDataContext dc = new TheDataContext())
{
    dc.Members.Attach(test, true);
    dc.SubmitChanges();
}
The error message says exactly what is going wrong: you are trying to attach an object that has been loaded from another DataContext, in your case from another instance of the DataContext. Don't dispose your DataContext (it gets disposed at the end of the using statement) before you change values and submit the changes; this should work if you do it all in one using statement. I also just saw that you want to attach the object again to the Members collection, but it is already in there. There is no need to do that; this should work just as well:
using (TheDataContext dc = new TheDataContext())
{
    var test = dc.Members.FirstOrDefault(m => m.fltId == 1);
    test.Name = "test2";
    dc.SubmitChanges();
}
Just change the value and submit the changes.
Latest Update:
(Removed all previous 3 updates)
My previous solution (removed it again from this post), found here, is dangerous. I just read this in an MSDN article:
"Only call the Attach methods on new
or deserialized entities. The only way
for an entity to be detached from its
original data context is for it to be
serialized. If you try to attach an
undetached entity to a new data
context, and that entity still has
deferred loaders from its previous
data context, LINQ to SQL will thrown
an exception. An entity with deferred
loaders from two different data
contexts could cause unwanted results
when you perform insert, update, and
delete operations on that entity. For
more information about deferred
loaders, see Deferred versus Immediate
Loading (LINQ to SQL)."
Use this instead:
// Get the object the first time by some id
using (TheDataContext dc = new TheDataContext())
{
    test = dc.Members.FirstOrDefault(m => m.fltId == 1);
}

// Somewhere else in the program
test.Name = "test2";

// Again somewhere else
using (TheDataContext dc = new TheDataContext())
{
    // Build a copy carrying the id of the 'test' object plus its current values
    Member modifiedMember = new Member()
    {
        Id = test.Id,
        Name = test.Name,
        Field2 = test.Field2,
        Field3 = test.Field3,
        Field4 = test.Field4
    };

    dc.Members.Attach(modifiedMember, true);
    dc.SubmitChanges();
}
After having copied the object, all references are detached, and none of the event handlers (deferred loading from the db) are connected to the new object. Only the value fields are copied to the new object, which can now be safely attached to the Members table. Additionally, you do not have to query the db a second time with this solution.
It is possible to attach entities from another DataContext.
The only thing that needs to be added to the code in the first post is this:
dc.DeferredLoadingEnabled = false;
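In terms of the code from the first post, that would look roughly like this (a sketch; DeferredLoadingEnabled has to be set before the context is used for any query or attach):
using (TheDataContext dc = new TheDataContext())
{
    // Must be set before any query or attach is performed on this context.
    dc.DeferredLoadingEnabled = false;
    dc.Members.Attach(test, true);
    dc.SubmitChanges();
}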
But disabling deferred loading is a drawback, since deferred loading is very useful. I read somewhere on this page that another solution would be to set Update Check on all properties to Never. This text says the same: http://complexitykills.blogspot.com/2008/03/disconnected-linq-to-sql-tips-part-1.html
But I can't get it to work even after setting Update Check to Never.
This is a function in my Repository class which I use to update entities
protected void Attach(TEntity entity)
{
    try
    {
        _dataContext.GetTable<TEntity>().Attach(entity);
        _dataContext.Refresh(RefreshMode.KeepCurrentValues, entity);
    }
    catch (DuplicateKeyException ex) // Data context knows about this entity so just update values
    {
        _dataContext.Refresh(RefreshMode.KeepCurrentValues, entity);
    }
}
Where TEntity is your DB class, and depending on your setup you might just want to do:
_dataContext.Attach(entity);
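A typical call site might look something like this (a hypothetical Update method, not from the original post, assuming the repository exposes SubmitChanges via its data context):
// Hypothetical repository method built on the Attach helper above.
public void Update(TEntity entity)
{
    Attach(entity);
    _dataContext.SubmitChanges();
}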