MyBatis SQL transactions with Struts 2

I'm using MyBatis with Struts 2 and I'm curious about setting up custom sessions. I will explain more below.
Currently I have a bunch of DAOs which open a session, execute their query, commit if necessary, and close their connection:
public String selectUsername(Integer id) {
    SqlSession session = getSession(); // opens the session
    UserMapper mapper = session.getMapper(UserMapper.class); // gets the mapper
    String name = mapper.selectUsername(id); // executes the query
    session.close(); // closes the session
    return name;
}
Sometimes I have complex things I need to do such as inserting a large object which consists of smaller objects which each have their own DAO:
public boolean insertNewProfile(UserProfile profile) {
    SqlSession session = getSession();
    UserMapper mapper = session.getMapper(UserMapper.class);
    int result = mapper.insertNewProfile(profile);
    session.commit();
    int id = mapper.selectId(profile.getUserName());
    for (UserSkill skill : profile.getSkills())
        skill.setUserID(id);
    // insert the user's skills in the database
    boolean skillResult = skillDAO.insertSkills(profile.getSkills());
    session.close();
    return (result > 0) && skillResult;
}
In the above, you can see that we are not only inserting the UserProfile, but we are also inserting the associated UserSkills by using the UserSkillDAO. That DAO also opens the session, commits, and closes the session. I'm trying to make this compound query to the database in one session.
I have tried making functions to open, commit, and close the session at the action level (Struts 2), but then I get errors about the session being closed before I actually close it:
### Error querying database. Cause: org.apache.ibatis.executor.ExecutorException: Executor was closed. ### The error may exist in . . .
I have looked into using MyBatis transactions, but most of the examples online are for older versions of iBATIS or use Spring.
Example of opening/closing sessions at the action level:
public String execute() {
    DAOFactory.openSession();
    logger.info("Creating ProjectDAO");
    projectDAO = DAOFactory.getDao(ProjectDAO.class);
    needDAO = DAOFactory.getDao(ProjectNeedDAO.class);
    majorDAO = DAOFactory.getDao(ProjectNeedMajorsDAO.class);
    skillDAO = DAOFactory.getDao(ProjectNeedSkillDAO.class);
    logger.info("Performing selectAllProjects query.");
    projects = projectDAO.selectAllProjects();
    // loop over the projects and add the project needs
    for (int i = 0; i < projects.size(); i++) {
        // get all of the project needs for a project
        projects.get(i).setProjectNeeds((ArrayList<ProjectNeed>) needDAO.selectProjectNeeds(projects.get(i).getProjectID()));
        // loop over all of the project needs and add the majors and skills needed
        for (int j = 0; j < projects.get(i).getProjectNeeds().size(); j++) {
            ArrayList<ProjectNeed> needs = projects.get(i).getProjectNeeds();
            needs.get(j).setMajors((ArrayList<ProjectNeedMajor>) majorDAO.selectProjectNeedMajors(needs.get(j).getNeedID()));
            needs.get(j).setSkills((ArrayList<ProjectNeedSkill>) skillDAO.selectProjectNeedSkills(needs.get(j).getNeedID()));
        }
    }
    DAOFactory.closeSession();
    logger.info("Returning success");
    return "success";
}
Is there an easy way to rework my code so that I can choose when to open and close sessions at the action level instead of at the DAO level? I believe the constant opening and closing of sessions involves a lot of overhead and is slowing down the performance of my website.
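For what it's worth, here is one way the code could be reworked so the action controls the session. This is an untested sketch, not a drop-in solution: the SqlSessionFactory field, the constructor-injected DAOs, and the method names are assumptions layered on the code above. The key idea is that DAOs receive the shared SqlSession and never commit or close it themselves; a likely cause of the "Executor was closed" error above is a DAO closing the shared session while the action (or another DAO) is still using it.

import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;

// A DAO that uses, but does not manage, the session
public class UserDAO {
    private final SqlSession session;

    public UserDAO(SqlSession session) {
        this.session = session; // shared session, owned by the caller
    }

    public int insertNewProfile(UserProfile profile) {
        UserMapper mapper = session.getMapper(UserMapper.class);
        return mapper.insertNewProfile(profile); // no commit, no close here
    }
}

// The Struts 2 action owns the session and the transaction boundary
public class ProfileAction {
    private SqlSessionFactory sessionFactory; // assumed to be configured at startup
    private UserProfile profile;              // populated by Struts 2 from the form

    public String execute() {
        SqlSession session = sessionFactory.openSession(); // one session per action
        try {
            UserDAO userDAO = new UserDAO(session);
            UserSkillDAO skillDAO = new UserSkillDAO(session);

            int result = userDAO.insertNewProfile(profile);
            boolean skillResult = skillDAO.insertSkills(profile.getSkills());

            session.commit(); // one commit for the whole compound operation
            return (result > 0) && skillResult ? "success" : "error";
        } catch (Exception e) {
            session.rollback(); // undo every step if any of them fails
            return "error";
        } finally {
            session.close(); // the only place the session is closed
        }
    }
}

MyBatis also ships a SqlSessionManager that keeps the current session in a ThreadLocal, which achieves the same sharing without passing the session to every DAO. Note that with a pooled DataSource, opening a session is cheap; the bigger win from this restructuring is usually correctness (one transaction per action) rather than raw speed.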

Related

RavenDB processing all documents of a certain type

I have a problem with updating all documents in a collection. What I need to do: iterate through ~2 million docs, load each doc into memory, parse HTML from one of the doc's fields, and save the doc back to the DB.
I tried Take/Skip paging logic, with and without indexes, ordering by Id, and so on, but some records still remain unchanged (even when testing with 1,000 records and a page size of 128). No new records are inserted while the documents are being updated. Simple patching (the patching API) does not work for this, as the update I need to perform is quite complex.
Please help with this. Thanks.
Code:
public static int UpdateAll<T>(DocumentStore docDB, Action<T> updateAction)
{
    return UpdateAll(0, docDB, updateAction);
}

public static int UpdateAll<T>(int startFrom, DocumentStore docDB, Action<T> updateAction)
{
    using (var session = docDB.OpenSession())
    {
        int queryCount = 0;
        int start = startFrom;
        while (true)
        {
            var current = session.Query<T>().Take(128).Skip(start).ToList();
            if (current.Count == 0)
                break;
            start += current.Count;
            foreach (var doc in current)
            {
                updateAction(doc);
            }
            session.SaveChanges();
            queryCount += 2;
            if (queryCount >= 30)
            {
                return UpdateAll(start, docDB, updateAction);
            }
        }
    }
    return 1;
}
Move your session.SaveChanges(); to outside the while loop.
As per Raven's session design, you can only do 30 interactions with the database during any given instance of a session.
If you refactor your code to call SaveChanges() only once (or very few times) per using block, it should work.
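For illustration, here is an untested sketch of that refactor, keeping the original recursive hand-off to a fresh session but saving only once per session (the sketch also puts Skip before Take, the conventional LINQ paging order; same usings as the snippet above):

public static int UpdateAll<T>(int startFrom, DocumentStore docDB, Action<T> updateAction)
{
    using (var session = docDB.OpenSession())
    {
        int queryCount = 0;
        int start = startFrom;
        while (true)
        {
            var current = session.Query<T>().Skip(start).Take(128).ToList();
            if (current.Count == 0)
                break;
            start += current.Count;
            foreach (var doc in current)
            {
                updateAction(doc);
            }
            queryCount++;                   // one request per page query
            if (queryCount >= 29)           // leave room for the single SaveChanges
            {
                session.SaveChanges();      // the only save this session performs
                return UpdateAll(start, docDB, updateAction); // continue in a fresh session
            }
        }
        session.SaveChanges();              // single save for the final session
    }
    return 1;
}

The trade-off is that changes are now saved once per session rather than once per page, so a crash loses more in-flight work; that is the price of staying under the per-session request limit.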
For more information, check out the Raven docs: Understanding The Session Object - RavenDB

RavenDB fails with ConcurrencyException when using new transaction

This code always fails with a ConcurrencyException:
[Test]
public void EventOrderingCode_Fails_WithConcurrencyException()
{
    Guid id = Guid.NewGuid();
    using (var scope1 = new TransactionScope())
    using (var session = DataAccess.NewOpenSession)
    {
        session.Advanced.UseOptimisticConcurrency = true;
        session.Advanced.AllowNonAuthoritativeInformation = false;
        var ent1 = new CTEntity
        {
            Id = id,
            Name = "George"
        };
        using (var scope2 = new TransactionScope(TransactionScopeOption.RequiresNew))
        {
            session.Store(ent1);
            session.SaveChanges();
            scope2.Complete();
        }
        var ent2 = session.Load<CTEntity>(id);
        ent2.Name = "Gina";
        session.SaveChanges();
        scope1.Complete();
    }
}
It fails at the last session.SaveChanges, stating that it is using a non-current etag. If I use Required instead of RequiresNew for scope2 (i.e., the same transaction), it works.
Since I load the entity (ent2), it should be using the newest etag, unless I am getting some cached value attached to scope1 (but I have disabled caching). So I do not understand why this fails.
I really need this setup. In the production code the outer TransactionScope is created by NServiceBus, and the inner one controls an aspect of event ordering. They cannot be the same transaction.
And I need the optimistic concurrency too, in case other threads use the entity at the same time.
BTW: This is using Raven 2.0.3.0
Since no one else has answered, I had better give it a go myself.
It turns out this was a human error. Due to a bad configuration of our IoC container, DataAccess.NewOpenSession gave me the same session every time (even across tests). In other words, Raven works as expected :)
Before I found out about this I also experimented with using TransactionScopeOption.Suppress instead of RequiresNew, which also worked; I just had to make sure that whatever I did in the suppressed scope could not fail, which was a valid option in my case.

Delaying writes to SQL Server

I am working on an app and need to keep track of how many views a page has, almost like how SO does it. It is a value used to determine how popular a given page is.
I am concerned that writing to the DB every time a new view needs to be recorded will impact performance. I know this is borderline premature optimization, but I have experienced the problem before. Anyway, the value doesn't need to be real time; it is OK if it is delayed by 10 minutes or so. I was thinking that caching the data and doing one large write every X minutes should help.
I am running on Windows Azure, so the AppFabric cache is available to me. My original plan was to create some sort of compound key (PostID:UserID) and tag the key with "pageview". AppFabric allows you to get all keys by tag, so I could let them build up and do one bulk insert into my table instead of many small writes. The table looks like this, but is open to change.
int PageID | guid userID | DateTime ViewTimeStamp
The website would still read the value from the database; writes would just be delayed. Make sense?
I just read that the Windows Azure AppFabric cache does not support tag-based searches, which pretty much negates my idea.
My question is, how would you accomplish this? I am new to Azure, so I am not sure what my options are. Is there a way to use the cache without tag-based searches? I am just looking for advice on how to delay these writes to SQL.
You might want to take a look at http://www.apathybutton.com (and the Cloud Cover episode it links to), which talks about a highly scalable way to count things. (It might be overkill for your needs, but hopefully it gives you some options.)
You could keep a queue in memory and, on a timer, drain the queue, collapse the queued items by totaling the counts per page, and write them in one SQL batch/round trip. For example, using a table-valued parameter (TVP) you could write all the queued totals with one stored-procedure call (see the sketch below).
That of course doesn't guarantee the view counts get written, since they sit in memory and are written lazily, but page counts shouldn't be critical data, and crashes should be rare.
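For illustration, here is a minimal sketch of the TVP approach. The table type, stored procedure, and all names here are hypothetical; you would create the type and procedure first, and the totals are assumed to have already been collapsed into a per-page dictionary:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

static class PageViewWriter
{
    // Assumes these (hypothetical) objects exist in the database:
    //   CREATE TYPE dbo.PageViewCount AS TABLE (PageID int, ViewCount int);
    //   CREATE PROCEDURE dbo.AddPageViews @views dbo.PageViewCount READONLY AS
    //       UPDATE h SET h.ViewCount = h.ViewCount + v.ViewCount
    //       FROM dbo.PageViews h JOIN @views v ON v.PageID = h.PageID;
    public static void WriteCounts(IDictionary<int, int> countsByPage, string connectionString)
    {
        // Shape the collapsed totals as a DataTable matching the table type
        var table = new DataTable();
        table.Columns.Add("PageID", typeof(int));
        table.Columns.Add("ViewCount", typeof(int));
        foreach (var pair in countsByPage)
            table.Rows.Add(pair.Key, pair.Value);

        using (var con = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.AddPageViews", con))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            var p = cmd.Parameters.AddWithValue("@views", table);
            p.SqlDbType = SqlDbType.Structured; // marks the parameter as a TVP
            p.TypeName = "dbo.PageViewCount";
            con.Open();
            cmd.ExecuteNonQuery(); // one round trip for all queued totals
        }
    }
}

Because the procedure's UPDATE increments in place, multiple web roles can post their totals concurrently without losing counts.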
You might want to have a look at how the "diagnostics" feature in Azure works. Not because you would use diagnostics for what you are doing, but because it deals with a similar problem and may provide some inspiration. I am just about to implement a data auditing feature that logs to table storage, so I also want to delay and batch the updates, and I have taken a lot of inspiration from diagnostics.
The way diagnostics in Azure works is that each role starts a little background "transfer" thread. Whenever you write any traces, they get stored in a list in local memory, and the background thread (by default) bunches all the requests up and transfers them to table storage every minute.
In your scenario, I would let each role instance keep track of a count of hits and then use a background thread to update the database every minute or so.
I would probably use something like a static ConcurrentDictionary (or one hanging off a singleton) on each web role, with each hit incrementing the counter for the page identifier. You'd need some thread-handling code to allow multiple requests to update the same counter in the list. Alternatively, just allow each "hit" to add a new record to a shared thread-safe list.
Then have a background thread increment the database once per minute with the number of hits per page since last time, and reset the local counter to 0 (or empty the shared list if you are going with that approach; again, be careful about the multithreading and locking).
The important thing is to make sure your database update is atomic: if you read the current count from the database, increment it, and then write it back, you may have two different web role instances doing this at the same time and thus lose an update.
EDIT:
Here is a quick sample of how you could go about this.
using System.Collections.Concurrent;
using System.Data.SqlClient;
using System.Threading;
using System;
using System.Collections.Generic;
using System.Linq;
class Program
{
    static void Main(string[] args)
    {
        // You would put this in your Application_Start for the web role
        Thread hitTransfer = new Thread(() => HitCounter.Run(new TimeSpan(0, 0, 1))); // You'd probably want the transfer to happen once a minute rather than once a second
        hitTransfer.Start();

        // Testing code - this just simulates various web threads being hit and adding hits to the counter
        RunTestWorkerThreads(5);
        Thread.Sleep(5000);

        // You would put the following line in your application shutdown
        HitCounter.StopRunning(); // You could do some cleverer stuff with aborting threads, joining the thread etc. but you probably won't need to
        Console.WriteLine("Finished...");
        Console.ReadKey();
    }

    private static void RunTestWorkerThreads(int workerCount)
    {
        Thread[] workerThreads = new Thread[workerCount];
        for (int i = 0; i < workerCount; i++)
        {
            workerThreads[i] = new Thread(
                (tagname) =>
                {
                    Random rnd = new Random();
                    for (int j = 0; j < 300; j++)
                    {
                        HitCounter.LogHit(tagname.ToString());
                        Thread.Sleep(rnd.Next(0, 5));
                    }
                });
            workerThreads[i].Start("TAG" + i);
        }
        foreach (var t in workerThreads)
        {
            t.Join();
        }
        Console.WriteLine("All threads finished...");
    }
}
public static class HitCounter
{
    private static System.Collections.Concurrent.ConcurrentQueue<string> hits;
    private static object transferlock = new object();
    private static volatile bool stopRunning = false;

    static HitCounter()
    {
        hits = new ConcurrentQueue<string>();
    }

    public static void LogHit(string tag)
    {
        hits.Enqueue(tag);
    }

    public static void Run(TimeSpan transferInterval)
    {
        while (!stopRunning)
        {
            Transfer();
            Thread.Sleep(transferInterval);
        }
    }

    public static void StopRunning()
    {
        stopRunning = true;
        Transfer();
    }

    private static void Transfer()
    {
        lock (transferlock)
        {
            var tags = GetPendingTags();
            var hitCounts = from tag in tags
                            group tag by tag into g
                            select new KeyValuePair<string, int>(g.Key, g.Count());
            WriteHits(hitCounts);
        }
    }

    private static void WriteHits(IEnumerable<KeyValuePair<string, int>> hitCounts)
    {
        // NOTE: I don't usually use SQL commands directly and have not tested the below.
        // The idea is that the update should be atomic, so even though you have multiple
        // web servers all issuing similar update commands, potentially at the same time,
        // they should all commit. I do urge you to test this part as I cannot promise this
        // code will work as-is.
        //using (SqlConnection con = new SqlConnection("xyz"))
        //{
        //    foreach (var hitCount in hitCounts.OrderBy(h => h.Key))
        //    {
        //        var cmd = con.CreateCommand();
        //        cmd.CommandText = "update hits set count = count + @count where tag = @tag";
        //        cmd.Parameters.AddWithValue("@count", hitCount.Value);
        //        cmd.Parameters.AddWithValue("@tag", hitCount.Key);
        //        cmd.ExecuteNonQuery();
        //    }
        //}
        Console.WriteLine("Writing....");
        foreach (var hitCount in hitCounts.OrderBy(h => h.Key))
        {
            Console.WriteLine(String.Format("{0}\t{1}", hitCount.Key, hitCount.Value));
        }
    }

    private static IEnumerable<string> GetPendingTags()
    {
        List<string> hitlist = new List<string>();
        var currentCount = hits.Count();
        for (int i = 0; i < currentCount; i++)
        {
            string tag = null;
            if (hits.TryDequeue(out tag))
            {
                hitlist.Add(tag);
            }
        }
        return hitlist;
    }
}

Why am I getting this NHibernate NonUniqueObjectException?

The following method queries my database, using a new session. If the query succeeds, it attaches (via "Lock") the result to a "MainSession" that is used to support lazy loading from a databound WinForms grid control.
If the result is already in the MainSession, I get the exception:
NHibernate.NonUniqueObjectException : a different object with the same identifier value was already associated with the session: 1, of entity: BI_OverlordDlsAppCore.OfeDlsMeasurement
when I attempt to re-attach, using the Lock method.
This happens even though I evict the result from the MainSession before I attempt to re-attach it.
I've used the same approach when I update a result, and it works fine.
Can anyone explain why this is happening?
How should I go about debugging this problem?
public static OfeMeasurementBase GetExistingMeasurement(OverlordAppType appType, DateTime startDateTime, short runNumber, short revision)
{
    OfeMeasurementBase measurement;
    var mainSession = GetMainSession();
    using (var session = _sessionFactory.OpenSession())
    using (var transaction = session.BeginTransaction())
    {
        // Get measurement that matches params
        measurement =
            session.CreateCriteria(typeof(OfeMeasurementBase))
                .Add(Expression.Eq("AppType", appType))
                .Add(Expression.Eq("StartDateTime", startDateTime))
                .Add(Expression.Eq("RunNumber", runNumber))
                .Add(Expression.Eq("Revision", revision))
                .UniqueResult() as OfeMeasurementBase;

        // Need to evict from the main session, to prevent a potential
        // NonUniqueObjectException if it's already in the main session
        mainSession.Evict(measurement);

        // Can't be attached to two sessions at once
        session.Evict(measurement);

        // Re-attach to the main session
        // Still throws NonUniqueObjectException!!!
        mainSession.Lock(measurement, LockMode.None);
        transaction.Commit();
    }
    return measurement;
}
I resolved the problem after finding this Ayende post on Cross Session Operations.
The solution was to use ISession.Merge to get the detached measurement updated in the main session:
public static OfeMeasurementBase GetExistingMeasurement(OverlordAppType appType, DateTime startDateTime, short runNumber, short revision)
{
    OfeMeasurementBase measurement;
    var mainSession = GetMainSession();
    using (var session = _sessionFactory.OpenSession())
    using (var transaction = session.BeginTransaction())
    {
        // Get measurement that matches params
        measurement =
            session.CreateCriteria(typeof(OfeMeasurementBase))
                .Add(Expression.Eq("AppType", appType))
                .Add(Expression.Eq("StartDateTime", startDateTime))
                .Add(Expression.Eq("RunNumber", runNumber))
                .Add(Expression.Eq("Revision", revision))
                .UniqueResult() as OfeMeasurementBase;
        transaction.Commit();

        if (measurement == null) return null;

        // Merge back into the main session, in case it has changed since the
        // main session was originally loaded
        var mergedMeasurement = (OfeMeasurementBase)mainSession.Merge(measurement);
        return mergedMeasurement;
    }
}

Entity Framework transaction error after deleting a row and inserting a new row with the same primary key

I am using ASP.NET MVC2 in Visual Studio 2008. I believe the SQL Server is 2005. I am using Entity Framework to access the database.
I've got the following table with a composite primary key based upon iRequest and sCode:
RequestbyCount
    iRequest integer
    sCode varchar(10)
    iCount integer
iRequest is a foreign key to a list of requests.
When a request is updated, I want to clear out the existing RequestbyCounts for that request and then add in the new RequestbyCounts. More than likely, the only difference between the old rows and the new ones will be the count.
For my code, I attempt it as follows:
// delete ALL our old requests
var oldEquipList = (from eq in myDB.dbEquipmentRequestedbyCountSet
                    where eq.iRequestID == oldData.iRequestID
                    select eq).ToList();
foreach (var oldEquip in oldEquipList)
{
    myDB.DeleteObject(oldEquip);
}
// myDB.SaveChanges(); <---- adding this line makes it work

// add in our new requests
foreach (var equip in newData.RequestList) // newData.RequestList is a List object
{
    if (equip.iCount > 0)
    {
        // add in our actual request items
        RequestbyCount reqEquip = new RequestbyCount();
        reqEquip.sCodePrefix = equip.sCodePrefix;
        reqEquip.UserRequest = newRequest;
        reqEquip.iCount = equip.iCount;
        myDB.AddToRequestbyCount(reqEquip);
    }
}
myDB.SaveChanges(); // save our results
The issue is that when I run it with the intermediate SaveChanges line uncommented, it works as desired, but my understanding is that doing this breaks the transaction apart.
If I leave the intermediate SaveChanges commented out as above, the process fails and I receive:
Violation of PRIMARY KEY constraint 'PK_RequestbyCount'. Cannot insert duplicate key in object 'dbo.RequestbyCount'. The statement has been terminated.
Obviously, without doing the intermediate SaveChanges, the old rows are NOT removed as desired.
I do NOT want the results saved unless everything succeeds.
I would rather not take the following approach:
// add in our new requests
foreach (var equip in newData.RequestList)
{
    if ((equip.iCount > 0) && (**it isn't in the database**))
    {
        // add in our actual request items
        RequestbyCount reqEquip = new RequestbyCount();
        reqEquip.sCodePrefix = equip.sCodePrefix;
        reqEquip.UserRequest = newRequest;
        reqEquip.iCount = equip.iCount;
        myDB.AddToRequestbyCount(reqEquip);
    }
    else if ((**it is in the database**) && (equip.iCount == 0))
    {
        **remove from database**
    }
    else
    {
        **edit the value in the database**
    }
}
Am I stuck with the above code, which basically makes a bunch of little calls to the database to check whether each item exists?
Or is there some way to tell the framework to attempt to delete the rows, but roll back if inserting the new rows fails?
You don't appear to be using transactions at all. You need to wrap all your code in
using (TransactionScope transaction = new TransactionScope())
{
    ...
    transaction.Complete();
}
Even better:
using (TransactionScope transaction = new TransactionScope())
{
    try
    {
        // your code here
        transaction.Complete();
    }
    catch (Exception)
    {
        // handle error
    }
}
Using the try/catch block will ensure that the transaction is not committed if an exception occurs, which is what you stated you wanted.
There is lots more on Entity Framework transactions on Microsoft's web site; the explanations there are quite good.