I have two COM objects (let's call them Control and Job). Control is CoCreatable, Job objects are created by Control.NewJob().
Control has a method Control.Start(job) that makes the specified job the current job. It remains the current job as long as no other job is set.
Now, for the client, the following behavior seems reasonable for these particular objects:
As long as one of its Jobs exists, Control exists
(trivial: Job holds strong ref to the Control it created)
As long as the client has either a reference to Control or its CurrentJob, neither gets destroyed
("trivial": CurrentJob is a strong ref)
Client should not need to "clear" CurrentJob before releasing references
Now I have a classic circular reference here; the condition for releasing it would be that neither object has any external references left.
I can solve this scenario by mucking around with ATL's InternalRelease implementation, but that is pretty ugly and a one-off solution.
Any suggestions? Existing solutions?
As long as one of its Jobs exists, Control exists
No, I'm fairly sure that this is the rule where you went wrong. Keep your eyes on the ball: the only reason you added the IControl::CreateJob() factory function is to give CJob (the implementation class, not the interface) a CControl* reference. This implies ownership; a particular IJob can only ever be associated with a particular Control instance. CControl should therefore keep a collection of the CJobs that it owns.
It now becomes straightforward:
The CControl instance is created as normal; there is only one AddRef() call, so only the client app owns the instance.
The IControl::CreateJob() implementation method creates a new CJob with the CJob(CControl*) constructor, adds it to the collection, and returns the IJob interface pointer. There is only one AddRef() call, so only the client app owns the instance.
The CJob destructor calls a private CControl::RemoveJob() method to remove itself from the collection, using the CControl* it got from its constructor, provided that pointer is not null.
The CControl destructor iterates its collection, setting the CControl* that any remaining CJob holds back to null. Those jobs are now un-owned.
The IControl::Start(IJob*) implementation method calls AddRef() to ensure the job stays alive; Release() when the job is done, is replaced, or the CControl object is destructed. Note that an error check is required: the method needs to iterate the collection to find the matching CJob object, so the client cannot start a job that was created by another Control instance. This also allows you to keep a CJob* as the CControl::currentJob private member.
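Purely as an illustration of that ownership shape (the answer is about C++/ATL; this C# transcription elides COM reference counting and lets Dispose stand in for the destructors):

using System;
using System.Collections.Generic;

class Control : IDisposable
{
    private readonly List<Job> jobs = new List<Job>();  // owned collection, no "AddRef"
    private Job currentJob;

    public Job CreateJob()
    {
        var job = new Job(this);  // hand the job a back-pointer to its owner
        jobs.Add(job);
        return job;
    }

    public void Start(Job job)
    {
        // Error check: only jobs created by this Control instance are accepted.
        if (!jobs.Contains(job))
            throw new ArgumentException("Job belongs to another Control instance.");
        currentJob = job;  // in COM, this is where AddRef() on the job would go
    }

    internal void RemoveJob(Job job) => jobs.Remove(job);

    public void Dispose()
    {
        foreach (var job in jobs)
            job.Orphan();  // surviving jobs become un-owned
        jobs.Clear();
        currentJob = null;  // in COM, Release() the current job here
    }
}

class Job : IDisposable
{
    private Control owner;

    internal Job(Control owner) { this.owner = owner; }

    internal void Orphan() { owner = null; }

    // The "destructor": remove this job from its owner's collection, if it still has one.
    public void Dispose()
    {
        owner?.RemoveJob(this);
        owner = null;
    }
}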
I don't think ATL has an out-of-the-box solution, since the behavior in question is sensitive to your specific requirements.
As long as the client has either a reference to Control or its CurrentJob, neither gets destroyed ("trivial": CurrentJob is a strong ref)
This requirement implies that the current Job holds a strong reference to Control, since Control needs to stay alive even when the client holds only a reference to the Job. That is, Control and Job hold strong references to one another, and the Control+Job pair then needs to handle the release of all external references correctly. For instance, this can be achieved in the following way.
Job's (and likewise Control's?) CComObjectRootEx::InternalRelease is overridden to check whether the only remaining external reference is the one held by the Control. If that is the case, the Job initiates a termination check: it calls a method on the Control to inspect the Control's own references. If the Job's reference is the only reference the Control sees on itself, both release their references to one another (and terminate).
I have a binding like the following in my kernel
kernel.Bind<IMyDependency>().To<MyDependencyImplementation>();
In a single app domain, multiple calls to kernel.Get<IMyDependency>() are made. Does Get<> return a shared instance or a new one every time?
We discovered a thread-safety issue in one of our dependencies that teams are working to rectify, but in the interim, if we can get Ninject to hand out a separate (non-shared) object per Get<> call, it could save the day for us.
Is there any way in Ninject to specify, for one particular dependency, that a new instance (or at least a non-shared one) should be returned with every Get<> call?
does Get<IMyDependency> return a shared instance or a new one every time?
You get a new instance of MyDependencyImplementation on each call. This is the default behaviour unless the call to Bind() has been made with a specific scope.
You may have a look at the Scopes documentation.
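For example, a minimal sketch of the contrast (both extension methods below are part of Ninject's fluent binding syntax; use one binding per service, not all three at once):

// Default: a new MyDependencyImplementation is activated for every Get<IMyDependency>() call.
kernel.Bind<IMyDependency>().To<MyDependencyImplementation>();

// The same behaviour, stated explicitly:
kernel.Bind<IMyDependency>().To<MyDependencyImplementation>().InTransientScope();

// For contrast: one shared instance for the lifetime of the kernel.
kernel.Bind<IMyDependency>().To<MyDependencyImplementation>().InSingletonScope();

So if your Bind() call looks like the one in the question, you already get the non-shared behaviour you are after.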
I have searched a lot about these questions, here and in a lot of other places, but I'm not getting everything I want to know!
From a WebApi project point of view, when are InTransientScope objects created? The Ninject docs state that such objects are created whenever they are requested, but in a WebApi project that handles HTTP requests the instance is created at request start time, so in this regard is it the same as InRequestScope?
In a WebApi project, is it okay to use InTransientScope objects, knowing that they will never be tracked by Ninject? If Ninject never keeps track of transient objects, then what is the purpose of this scope, and what happens to such objects after they have been used?
If I declare an object with InRequestScope and that object doesn't implement the IDisposable interface, what happens to that object after the web request has completed? Will it be treated the same way as an InTransientScope object?
Should different scopes be used for WebApi controllers, repositories (which use an InRequestScope session that is created separately), and application services?
There are two purposes for scopes:
Only allow one object to be created per scope
(optionally) dispose of the object once the scope ends.
As said, disposal is optional. If an object doesn't implement the IDisposable interface, it is not disposed. There are plenty of use cases for that.
The InTransientScope is the default scope - the one used if you don't specify another. It means that every time a type A is requested from the kernel, one activation takes place and the result is returned. The activation logic is specified by the binding part that follows immediately after the Bind part (To<...>, ToMethod(...), ...).
However, this does not necessarily happen at the time the web request starts and the controller is instantiated. For example, you can use factories or service location (e.g. ResolutionRoot.Get<Foo>()) to create more objects after the controller has been created. To answer your questions in short:
When: when a request takes place, or whenever your code asks Ninject for a type, either directly (IResolutionRoot.Get(..)) or through a factory. As InTransientScope objects are not tracked, they will not be disposed; however, if they are not disposable and the entire request code requests only one IFoo, then practically there is no discernible difference (apart from the slight performance hit due to tracking InRequestScope()-ed objects).
As long as you don't need to make sure that instances are shared and/or disposed, this is completely fine. Once they are no longer used, they will get garbage-collected like any object you would new yourself.
When the scope ends, Ninject will remove its weak reference to the non-IDisposable object. The object itself will not be touched - just as when bound InTransientScope().
That depends on your specific requirements and implementation details. Generally, one needs to make sure that long-scoped objects don't depend on short-scoped objects. For example, a singleton service should not depend on a request-scoped object. As a base rule, everything should be InTransientScope() unless there's a specific reason why it should not be. The reason will dictate which scope to use...
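For illustration, a sketch of such a layering (all type names are hypothetical, sessionFactory is assumed to be in scope, and InRequestScope() comes from the Ninject.Web.Common package):

// One session shared by everything within a single web request.
kernel.Bind<ISession>().ToMethod(ctx => sessionFactory.OpenSession()).InRequestScope();

// Repositories depend on the request-scoped session.
kernel.Bind<IJobRepository>().To<JobRepository>().InRequestScope();

// Stateless application service: the transient default is fine.
kernel.Bind<IPricingService>().To<PricingService>();

// Global, thread-safe infrastructure may be a singleton - but it must
// never depend on request-scoped objects.
kernel.Bind<IClock>().To<SystemClock>().InSingletonScope();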
I am studying CORBA and how IDL maps interfaces to different languages. I read that you cannot write constructors and destructors in an IDL interface because objects are not created locally.
My question is:
How can a client delete an object if it cannot specify a destructor in the IDL interface? Is only the server responsible for deleting objects? Does CORBA provide a garbage-collection mechanism/specification, or is the language on the server side responsible for that? If only the server is responsible for deleting objects, how can it be sure that an object should be deleted? By pinging the client?
An e-mail reply from one of my professors:
All lifecycle management of CORBA objects is done by the object adapter. There is no built-in garbage collection in CORBA (except that non-persistent objects are deactivated and removed automatically when the session expires or hangs, or when a time limit has expired). The servant object deregistration method deactivate_object() should be explicitly called on the OA (in server code) to make the OA deregister/deallocate an object properly (after awaiting that all possibly still running calls on that object have terminated).
For simulating remote constructor behavior, a (server-side) factory object (another CORBA object) should be used. For simulating remote destructor behavior, the factory object might provide an explicit destroy method (user-level memory management controlled by the client) or implement reference counting for garbage collection at user level (controlled by the server). The latter is tricky because the ordering with the servant deregistration call to the OA (deactivate_object()) must be correct.
Say your code receives an instance from an external source, and you have no control over how the instance was created. The instance does not implement INotifyPropertyChanged. Is there an adapter you can pass it to, as in:
var adapter = new ChangeNotifierAdapter( instance );
such that the adapter implements INotifyPropertyChanged and will thereafter raise its PropertyChanged event for all property changes of instance?
If you can guarantee that all changes to the instance will go via your wrapper, then you can use a proxy - either a dynamic one or one generated at design time (n.b. if you still have to expose the concrete class rather than an interface, it will have to be a dynamic proxy).
If that's not true (or even if it is, but changes to one property affect the value of another), then the only way to achieve this is via polling. The wrapper has to periodically poll all the properties of the object, determine which have changed, and raise events accordingly. This is messy, and can be a serious battery drain on mobile devices.
Both suck of course. Mapping to an object that does implement it is generally a better solution.
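For what it's worth, a minimal sketch of the polling flavour, using the ChangeNotifierAdapter name from the question (it snapshots all readable public properties via reflection and raises PropertyChanged for any whose value differs on the next poll):

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Reflection;
using System.Threading;

public sealed class ChangeNotifierAdapter : INotifyPropertyChanged, IDisposable
{
    private readonly object instance;
    private readonly PropertyInfo[] properties;
    private readonly Dictionary<string, object> snapshot;
    private readonly Timer timer;

    public event PropertyChangedEventHandler PropertyChanged;

    public ChangeNotifierAdapter(object instance, TimeSpan pollInterval)
    {
        this.instance = instance;
        // Readable public instance properties, excluding indexers.
        properties = instance.GetType()
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Where(p => p.CanRead && p.GetIndexParameters().Length == 0)
            .ToArray();
        // Remember the initial values so the first poll has something to compare against.
        snapshot = properties.ToDictionary(p => p.Name, p => p.GetValue(instance));
        timer = new Timer(_ => Poll(), null, pollInterval, pollInterval);
    }

    private void Poll()
    {
        foreach (var p in properties)
        {
            var current = p.GetValue(instance);
            if (!Equals(current, snapshot[p.Name]))
            {
                snapshot[p.Name] = current;
                // Note: raised on a thread-pool thread, not the UI thread.
                PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(p.Name));
            }
        }
    }

    public void Dispose() => timer.Dispose();
}

Usage would then be var adapter = new ChangeNotifierAdapter(instance, TimeSpan.FromMilliseconds(500)); to poll twice a second. Since PropertyChanged fires on a thread-pool thread, WPF consumers would need to marshal back to the dispatcher.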
I am using a third-party application and making a call to create an instance of my COM object. This call succeeds; however, the function in the third-party application does not return a pointer to the created object (I have no idea why). Is there any way to get a pointer to my object?
To clarify, here's some pseudo-code:
// This function has no return value!
ThirdPartyApp.CreateObject("MyObject");
When your object is created, make it store a reference to itself in a global variable or some other sort of shareable storage location. Then export a function from your COM DLL that will read from that location so you can call it and get a reference to the previously created object.
This shared reference should not increase the reference count of the object, or else it will never get destroyed. When your object gets destroyed, make sure you clear that shared reference.
If you can have more than one instance of this object in the same process, then you may need to manage a list instead of just a single global variable.
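The answer is language-agnostic; as a sketch in C# terms (class and method names are invented, and COM registration attributes/GUIDs are omitted), it might look like the following. A WeakReference keeps the stash from adding to the reference count, per the caveat above:

using System;
using System.Runtime.InteropServices;

[ComVisible(true)]
public class MyObject
{
    // Weak, so stashing the instance does not keep it alive (no extra ref count).
    internal static WeakReference LastInstance;

    public MyObject()
    {
        // Stash a reference to ourselves when the third-party app creates us.
        LastInstance = new WeakReference(this);
    }
}

[ComVisible(true)]
public class MyObjectLocator
{
    // CoCreate this locator yourself, then fish out the instance the app created.
    public MyObject GetLastInstance()
    {
        var weak = MyObject.LastInstance;
        return weak != null ? weak.Target as MyObject : null;
    }
}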
Store your returned HRESULT (is this C++?); it may yield a clue.
Sometimes there are sneaky, tricky marshalling/creation problems if you are calling a factory-type object that lives in another apartment.