In NHibernate 3.1, ISession.SaveOrUpdateCopy() has been marked as deprecated. The documentation suggests using Merge() instead. The documentation for each is as follows:
SaveOrUpdateCopy(object obj)
Copy the state of the given object onto the persistent object with the same identifier. If there is no persistent instance currently associated with
the session, it will be loaded. Return the persistent instance. If the
given instance is unsaved or does not exist in the database, save it and
return it as a newly persistent instance. Otherwise, the given instance
does not become associated with the session.
Merge(object obj)
Copy the state of the given object onto the persistent object with the same
identifier. If there is no persistent instance currently associated with
the session, it will be loaded. Return the persistent instance. If the
given instance is unsaved, save a copy of and return it as a newly persistent
instance. The given instance does not become associated with the session.
This operation cascades to associated instances if the association is mapped
with cascade="merge".
The semantics of this method are defined by JSR-220.
They look nearly identical to me, but there are bound to be some subtleties involved. If so, what are they?
SaveOrUpdateCopy is now considered obsolete, and Merge is meant to take over for it (hence the extreme similarity).
They are pretty much the same, except I don't think the cascade options were available with SaveOrUpdateCopy. That point is moot, however, since Merge should be the method you use.
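To make that concrete, here is a minimal sketch of the typical Merge usage with a detached entity (Customer, sessionFactory, and detachedCustomer are illustrative names, not anything from the original post):

using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    // The detached instance itself stays detached; the returned copy
    // is the persistent instance the session actually tracks.
    Customer persistent = (Customer)session.Merge(detachedCustomer);
    tx.Commit();
}

Note that any later changes should be made to the returned persistent instance, not to detachedCustomer.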
UPDATE: I went into the NHibernate source code just to make sure they are as similar as I thought, and here is what I found.
Both Merge and SaveOrUpdateCopy have very similar implementations:
public object Merge(string entityName, object obj)
{
    using (new SessionIdLoggingContext(SessionId))
    {
        return FireMerge(new MergeEvent(entityName, obj, this));
    }
}

public object SaveOrUpdateCopy(object obj)
{
    using (new SessionIdLoggingContext(SessionId))
    {
        return FireSaveOrUpdateCopy(new MergeEvent(null, obj, this));
    }
}
Their FireXXXX methods are also very similar:
private object FireMerge(MergeEvent @event)
{
    using (new SessionIdLoggingContext(SessionId))
    {
        CheckAndUpdateSessionStatus();
        IMergeEventListener[] mergeEventListener = listeners.MergeEventListeners;
        for (int i = 0; i < mergeEventListener.Length; i++)
        {
            mergeEventListener[i].OnMerge(@event);
        }
        return @event.Result;
    }
}
private object FireSaveOrUpdateCopy(MergeEvent @event)
{
    using (new SessionIdLoggingContext(SessionId))
    {
        CheckAndUpdateSessionStatus();
        IMergeEventListener[] saveOrUpdateCopyEventListener = listeners.SaveOrUpdateCopyEventListeners;
        for (int i = 0; i < saveOrUpdateCopyEventListener.Length; i++)
        {
            saveOrUpdateCopyEventListener[i].OnMerge(@event);
        }
        return @event.Result;
    }
}
The methods are exactly the same except that they draw on different event listener lists, yet even the types of the lists (IMergeEventListener) are the same!
Looking at the listener lists, both are initialized with a default listener. The default listener for Merge is of type DefaultMergeEventListener, while the one for SaveOrUpdateCopy is DefaultSaveOrUpdateCopyEventListener. The difference between the two methods therefore comes down to the difference between these two implementations (assuming you keep the default listeners, which you will 99% of the time).
However, the really interesting part is the difference in implementation. If you look at DefaultSaveOrUpdateCopyEventListener, you get this:
public class DefaultSaveOrUpdateCopyEventListener : DefaultMergeEventListener
{
    protected override CascadingAction CascadeAction
    {
        get { return CascadingAction.SaveUpdateCopy; }
    }
}
This means the default behavior of Merge and SaveOrUpdateCopy differs only in the cascading action; everything else is exactly the same.
I wish to access many classes and variables, and I would like to do this by dynamically setting the class name and variable name. Currently I am using
MyClass["myVariable1"]
to dynamically access the variable name
MyClass.myVariable1
I want to also dynamically access the class name, something like
["MyClass"]["myVariable1"]
But this does not work.
The purpose is that I have a shared object with many user settings, and I want to iterate through the shared object and set all the user settings across all the classes. I think that if I can't dynamically access the classes, I must have a statement for each and every class name/variable combination.
I advise against such a practice. Although technically possible, it is like welcoming a disaster into the app architecture:
You rely on something you have no control over: the way Flash names the classes.
You give up the future possibility of protecting your code with identifier-renaming obfuscation, because renaming would render your lookups invalid.
Compile-time error checking is better than runtime checking, and you are leaving everything to runtime. If it fails in a non-debug environment, you will never know.
The next developer to work with your code (which might be you in a couple of years) will have a hard time finding where the initial data comes from.
So, given all of the above, I encourage you to switch to another model:
package
{
    import flash.net.SharedObject;

    public class SharedData
    {
        static private var SO:SharedObject;

        static public function init():void
        {
            SO = SharedObject.getLocal("my_precious_shared_data", "/");
        }

        static public function read(key:String):*
        {
            // if (!SO) init();
            return SO.data[key];
        }

        static public function write(key:String, value:*):void
        {
            // if (!SO) init();
            SO.data[key] = value;
            SO.flush();
        }

        // Returns stored data if any, or the default value otherwise.
        // Good for default application values that might change upon
        // user activity, e.g. sound volume or level progress.
        static public function readSafe(key:String, defaultValue:*):*
        {
            // if (!SO) init();
            return SO.data.hasOwnProperty(key) ? read(key) : defaultValue;
        }
    }
}
In the main class you call
SharedData.init();
// So now your shared data is available.
// If you are not sure init() will run before other classes read the
// shared data, just uncomment the "if (!SO) init();" lines in the SharedData methods.
Then each class that feeds on these data should have an initialization block:
// It's a good idea to keep keys as constants
// so you won't accidentally mistype them.
// Compile time > runtime again.
static private const SOMAXMANA:String = "maxmana";
static private const SOMAXHP:String = "maxhp";

private var firstTime:Boolean = true;
private var maxmana:int;
private var maxhp:int;

// ...

if (firstTime)
{
    // Make sure we don't read them a second time.
    firstTime = false;
    maxhp = SharedData.readSafe(SOMAXHP, 100);
    maxmana = SharedData.readSafe(SOMAXMANA, 50);
}
Well, again. The code above:
does not employ weird practices and is easy to understand
in each class anyone can clearly see where the data come from
will be checked for errors at compile time
can be obfuscated and protected
You can try getting the class into a variable and going from there:
import flash.utils.getDefinitionByName;

// The class must be compiled into the SWF and looked up by its fully qualified name.
var myClass:Class = getDefinitionByName("MyClass") as Class;
myClass["myVariable1"] = x;
I have come to really appreciate the benefits of using objects to deploy a given application within the DigitalMicrograph environment via the DMS language. The object-oriented approach opens the door to the use of reusable design patterns involving collaborating objects, e.g. Model-View-Controller (MVC).

However, objects within DM seem to be highly volatile due to the use of automatic reference counting to manage their life cycles. In order for an MVC trio, or any other set of collaborating objects, to stay alive long enough to be useful, at least one of them must be rooted in a non-volatile object managed by the DM application. So far, the only such objects I have come across within DM are those based on the UIFrame class (i.e. modeless dialogs and UI palettes). For MVC implementations, this works out fine since it makes sense to implement the View as a UIFrame object. It's just a bit unconventional in that the View object becomes the root object that keeps the MVC trio alive and functioning. Normally it is the Controller object that is rooted in the application and manages the Model and View objects.

But what about design patterns that do not involve UI? Is there any (acceptable) way to give a set of collaborating objects persistence without rooting them in a UIFrame object? Are there other application-rooted object types that can serve this purpose? I assume setting up a reference cycle would not be an acceptable approach due to the inevitable risk of memory leaks.
The third, and by far the best and cleanest, solution is to launch your object as a 'listener' to some event. Since you are looking for an object that should stay in scope as long as DigitalMicrograph is open, it is probably best to listen to the application itself. By listening for the "application_about_to_close" message you also get the ideal handle to properly release all resources before shutdown. The code is the following:
From my 3 answers this is the one I would use. (The others should just illustrate options.)
class MyPermanentObject
{
    MyPermanentObject( object self )  { result("created MyPermanentObject :" + self.ScriptObjectGetID() + "\n"); }
    ~MyPermanentObject( object self ) { result("killed MyPermanentObject :" + self.ScriptObjectGetID() + "\n"); }

    void DeInitialize( object self, number eventFlags, object appObj )
    {
        OKDialog( "The application is closing now. Deinitialize stuff properly!" );
    }
}

{
    object listener = Alloc( MyPermanentObject )
    ApplicationAddEventListener( listener, "application_about_to_close:DeInitialize" )
}
I can think of various ways to get this persistence, but the one which jumped to mind first was to launch one object into a background thread, like in the example below. The actual background thread can check every so often if the object should still remain, and by sharing the object ID with the outside world, other objects (which don't have to be persistent) can access the "anchored" object.
A word of warning though: If you keep things in memory like this, you have to be careful when closing DigitalMicrograph. If the object hangs on to some items DM wants to destroy, you might see errors or crashes at the end.
// This is the "anchored" object. It will remain in memory, because we launch it on a separate thread.
// On that thread, it loops until a variable is set to false (or until SHIFT is pressed).
Class IPersist : Thread
{
    number keepme

    IPersist( object self )  { result("created IPersist:" + self.ScriptObjectGetID() + "\n"); }
    ~IPersist( object self ) { result("killed IPersist:" + self.ScriptObjectGetID() + "\n\n\n\n"); }

    void CallFromOutside( object self ) { Result( "\t IPersist can be used!\n" ); }
    void StopFromOutside( object self ) { keepme = 0; }

    void RunThread( object self )
    {
        keepme = 1
        Result( "\t Called once at start.\n" )
        While( keepme && !ShiftDown() ) yield()
        Result( "\t Finished.\n" )
    }
}
// Just an example class used to access the 'anchored' object.
Class SomethingElse
{
    number keepID

    SomethingElse( object self )  { result("created SomethingElse:" + self.ScriptObjectGetID() + "\n"); }
    ~SomethingElse( object self ) { result("killed SomethingElse:" + self.ScriptObjectGetID() + "\n"); }

    void SetKeepID( object self, number id ) { keepID = id; }

    void CallOut( object self )
    {
        result( "SomethingElse object is accessing CallOut...\n" )
        object p = GetScriptObjectFromID( keepID )
        if ( p.ScriptObjectIsValid() )
        {
            p.CallFromOutside()
        }
    }

    void CallStop( object self )
    {
        result( "SomethingElse object is accessing CallStop...\n" )
        object p = GetScriptObjectFromID( keepID )
        if ( p.ScriptObjectIsValid() )
        {
            p.StopFromOutside()
        }
    }
}
// Main script. Create the object on a separate thread, then feed its ID as a "weak reference" into the second object.
{
    object ob = Alloc(IPersist)
    ob.StartThread()

    object other = Alloc(SomethingElse)
    other.SetKeepID( ob.ScriptObjectGetID() )
    other.CallOut()

    If ( TwoButtonDialog( "You can either stop IPersist now, or by pressing SHIFT later.", "Stop now", "later" ) )
        other.CallStop()
}
An alternative way is to have two objects keep references to each other. Such a reference cycle is something one would normally avoid, but for the purpose of anchoring it works as well. Neither object can go out of scope until you release one on purpose.
Again, it is your responsibility to release things when you want a proper shutdown of the system.
The code for the reference-cycle approach is rather slim:
class SelfLock
{
    object partner

    SelfLock( object self )  { result("created SelfLock:" + self.ScriptObjectGetID() + "\n"); }
    ~SelfLock( object self ) { result("killed SelfLock:" + self.ScriptObjectGetID() + "\n"); }

    void SetPartner(object self, object p) { partner = p; }
    void ReleasePartner(object self) { partner = NULL; }
}

{
    object p1 = Alloc(SelfLock)
    object p2 = Alloc(SelfLock)
    p1.SetPartner(p2)
    p2.SetPartner(p1)

    if ( TwoButtonDialog( "Release partner", "Yes", "No keep locked" ) )
        p1.ReleasePartner()
}
I have a setup with a main model (QStandardItemModel), a proxy model which changes the output of the DisplayRole, and a separate table view displaying each model. Inside the main model's data is a user role that stores a pointer to another QObject, which the proxy model uses to get the desired display value.
I'm running into problems when the object pointed to by that variable is deleted. I am handling deletion in the main model via the destroyed(QObject*) signal. Inside the slot, I search through the model looking for any items that point to the object and delete the reference.
That part works fine on its own, but I have also connected to the dataChanged(...) signal of the proxy model, in whose slot I call resizeColumnsToContents() on the table view showing the proxy model. That in turn calls the proxy's data() function. Here I check whether the item has a pointer and, if it does, get some information from the object for display.
The result of all this becomes:
1. The object about to be deleted triggers the destroyed(...) signal.
2. The main model looks for any items using the deleted object and calls setData to remove the reference.
3. The table view catches the dataChanged signal from the proxy model and resizes its columns.
4. The proxy model's data(...) is called. It checks whether the item in the main model has the object pointer and, if so, displays a value from the object. If not, it displays something else.
The problem is that at step 4 the reference in the main model apparently still hasn't been cleared; the pointer address is still stored. The object the pointer was referencing, though, has been deleted by this point, resulting in a segfault.
How can I fix my setup to make sure the main model is finished deleting pointer references before the proxy model tries to update?
Also, here is pseudo-code for the relevant sections:
// elsewhere
Object *someObject = new Object();
QModelIndex index = mainModel->index(0, 0);
mainModel->setData(index, QVariant::fromValue(someObject), ObjectPtrRole);
// do stuff
delete someObject; // Qt is actually doing this, I'm not doing it explicitly

// MainModel
void MainModel::onObjectDestroyed(QObject *obj)
{
    // while iterating over all model items:
    // if the item holds a pointer to obj, clear the reference
    item->setData(QVariant::fromValue<Object*>(nullptr), ObjectPtrRole);
}
// receives the proxy model's dataChanged signal
void onProxyModelDataChanged(...)
{
    ui->tblProxyView->resizeColumnsToContents();
}
QVariant ProxyModel::data(const QModelIndex &index, int role) const
{
    QModelIndex srcIndex = mapToSource(index);
    if (role == Qt::DisplayRole)
    {
        QVariant v = sourceModel()->data(srcIndex, ObjectPtrRole);
        Object *ptr = qvariant_cast<Object*>(v);
        if (ptr != NULL)
            return ptr->getDisplayData();
        else
            return sourceModel()->data(srcIndex, role);
    }
    // all other roles pass straight through
    return sourceModel()->data(srcIndex, role);
}
The problem is that ptr is not NULL, but the referenced object has already been deleted by the time ProxyModel::data(...) is called, so I end up with a segfault.
To avoid dangling pointer dereferences with instances of QObject, you can do one of two things:
Use object->deleteLater() - the object will be deleted once control returns to the event loop. Such functionality is also known as an autorelease pool.
Use a QPointer. It will set itself to null upon deletion of the object, so you can check it before use; see the sketch below.
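A minimal sketch of both options, using a plain QObject as a stand-in for the asker's Object class:

#include <QCoreApplication>
#include <QEvent>
#include <QObject>
#include <QPointer>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QObject *object = new QObject;
    QPointer<QObject> guard(object);  // resets itself to null when the object dies

    object->deleteLater();            // schedules deletion for the event loop

    qDebug() << "null before event loop?" << guard.isNull();  // false: not deleted yet

    // Let the pending deferred deletion be processed (normally the event loop does this).
    QCoreApplication::sendPostedEvents(nullptr, QEvent::DeferredDelete);

    qDebug() << "null after deferred delete?" << guard.isNull();  // true

    if (!guard.isNull())              // the safe check before any dereference
        qDebug() << guard->objectName();
    return 0;
}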
I am trying to learn WCF with this example
http://www.codeproject.com/Articles/39143/C-WCF-Client-Server-without-HTTP-with-Callbacks-Ma
I am also trying to extend the functionality on the server by adding mutual exclusion with multiple clients.
Basically, I want a global array of numbers and a function (exposed with an OperationContract) that can access this array, but only one client may access the array at a time.
Can someone point me in the right direction by adding a simple function with a mutual exclusion lock?
Depending on what exactly you want to do, how about putting a lock around the function accessing your array (and maybe even putting your array into a singleton)?
Then you could have
class SingletonClassForYourArray
{
    private static readonly object aLock = new object();
    private static SingletonClassForYourArray instance;
    private int[] yourArray;

    public static SingletonClassForYourArray GetInstance()
    {
        // normal singleton init of instance on demand
        lock (aLock)
        {
            if (instance == null)
                instance = new SingletonClassForYourArray();
            return instance;
        }
    }

    public int[] YourArray
    {
        get
        {
            lock (aLock)
            {
                return yourArray;
            }
        }
    }
}
This would be the easiest way to have only one client access the array at a time. All clients without the lock will have to wait their turn (fairness not guaranteed). Be careful, as this may result in timeouts if clients have to wait too long.
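To sketch how that could look from the service side (INumberService and its members are hypothetical names, not from the linked article), a single static lock serializes all access to the shared array. The lock and the array are static because WCF may create a separate service instance per call or session:

using System.ServiceModel;

[ServiceContract]
public interface INumberService
{
    [OperationContract]
    int GetNumber(int index);

    [OperationContract]
    void SetNumber(int index, int value);
}

public class NumberService : INumberService
{
    // One static lock guards the one shared array, so only one
    // client request touches it at a time.
    private static readonly object arrayLock = new object();
    private static readonly int[] numbers = new int[100];

    public int GetNumber(int index)
    {
        lock (arrayLock)
        {
            return numbers[index];
        }
    }

    public void SetNumber(int index, int value)
    {
        lock (arrayLock)
        {
            numbers[index] = value;
        }
    }
}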
This is how we implement a generic Save() service in WCF for our EF entities. A T4 template does the work for us. Even though we don't have any problems with it, I hate to assume this is the best approach (even if it might be). You guys seem pretty darn bright and helpful, so I thought I would pose the question:
Is there a better way?
[OperationContract]
public User SaveUser(User entity)
{
    bool _IsDeleted = false;
    using (DatabaseEntities _Context = new DatabaseEntities())
    {
        switch (entity.ChangeTracker.State)
        {
            case ObjectState.Deleted:
                // delete
                _IsDeleted = true;
                _Context.Users.Attach(entity);
                _Context.DeleteObject(entity);
                break;
            default:
                // everything else
                _Context.Users.ApplyChanges(entity);
                break;
        }
        // now, to the database
        try
        {
            // try to save changes, which may cause a conflict
            _Context.SaveChanges(System.Data.Objects.SaveOptions.None);
        }
        catch (System.Data.OptimisticConcurrencyException)
        {
            // resolve the concurrency conflict by refreshing
            _Context.Refresh(System.Data.Objects.RefreshMode.ClientWins, entity);
            // save changes
            _Context.SaveChanges();
        }
    }
    // return
    if (_IsDeleted)
        return null;

    entity.AcceptChanges();
    return entity;
}
Why are you doing this with self-tracking entities? What was wrong with this:
[OperationContract]
public User SaveUser(User entity)
{
    bool isDeleted = false;
    using (DatabaseEntities context = new DatabaseEntities())
    {
        isDeleted = entity.ChangeTracker.State == ObjectState.Deleted;
        context.Users.ApplyChanges(entity); // it deletes entities marked for deletion as well
        try
        {
            // no need to postpone accepting changes; they will not be accepted if an exception happens
            context.SaveChanges();
        }
        catch (System.Data.OptimisticConcurrencyException)
        {
            context.Refresh(System.Data.Objects.RefreshMode.ClientWins, entity);
            context.SaveChanges();
        }
    }
    return isDeleted ? null : entity;
}
If I'm not mistaken, people typically don't expose their Entity Framework objects directly in a WCF service. Entity Framework is typically thought of as a data-access layer, and WCF is more of a front-end layer, so they are put on different tiers.
A Data-Transfer Object (DTO) is used in the WCF methods. This is typically a POCO which doesn't have any state-tracking on it whatsoever. The DTO is then mapped to an Entity either by hand or via a framework like AutoMapper.
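As a rough sketch of that layering (UserDto and the property names are made up for illustration; the mapping is written by hand here, though a library like AutoMapper can generate the equivalent):

using System.Runtime.Serialization;

[DataContract]
public class UserDto
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Name { get; set; }
}

public static class UserMapper
{
    // Entity -> DTO for sending results across the service boundary.
    public static UserDto ToDto(User entity)
    {
        return new UserDto { Id = entity.Id, Name = entity.Name };
    }

    // DTO -> entity for applying incoming changes before SaveChanges().
    public static void ApplyToEntity(UserDto dto, User entity)
    {
        entity.Name = dto.Name;
    }
}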
Typically clients should know whether they are "adding" or "updating" an object, and I would personally prefer these to be two separate operations on the service interface. Also, I would definitely require them to use a separate method for deleting an object. However, if you absolutely need a generic "Save", you should be able to tell whether the object you've been given is "new" or not based on the presence (or absence) of a primary key value.
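If you do keep a single generic Save, the new-versus-existing decision might look like this sketch (assuming an integer identity key named Id that stays 0 until the database assigns it):

public User SaveUser(User entity)
{
    using (var context = new DatabaseEntities())
    {
        if (entity.Id == 0)
            context.Users.AddObject(entity);     // no key yet: insert
        else
            context.Users.ApplyChanges(entity);  // key present: update
        context.SaveChanges();
    }
    return entity;
}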
A lot of the code can be put into a generic utility. For example, supposing your T4 template produces attributes on the key values of your entities, you could automatically determine whether the key values are present and perform an insert or update accordingly. Also, the try/catch retry block around SaveChanges (while probably unnecessary) could easily be put into a simple utility method to be more DRY.
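The retry block, for instance, could be factored out roughly like this (the helper name is illustrative; the calls are the same ones used in the question):

private static void SaveWithConcurrencyRetry(System.Data.Objects.ObjectContext context, object entity)
{
    try
    {
        // First attempt; may raise a concurrency conflict.
        context.SaveChanges(System.Data.Objects.SaveOptions.None);
    }
    catch (System.Data.OptimisticConcurrencyException)
    {
        // Resolve the conflict in favor of the client, then retry once.
        context.Refresh(System.Data.Objects.RefreshMode.ClientWins, entity);
        context.SaveChanges();
    }
}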