Source: https://refactoring.guru/design-patterns/factory-method
I was wondering what the exact definition of a "free object" is in the context below, and what free objects mean in general.
Context
Use the Factory Method when you want to save system resources by reusing existing objects instead of rebuilding them each time.
You often experience this need when dealing with large, resource-intensive objects such as database connections, file systems, and network resources.
Let’s think about what has to be done to reuse an existing object:
First, you need to create some storage to keep track of all of the created objects.
When someone requests an object, the program should look for a free object inside that pool.
… and then return it to the client code.
If there are no free objects, the program should create a new one (and add it to the pool).
That’s a lot of code! And it must all be put into a single place so that you don’t pollute the program with duplicate code.
Free objects are objects of the pool that a client has returned to it, or objects that are sitting in the pool and have not been handed out yet.
What is a pool?
As Wikipedia says about pools:
In computer science, a pool is a collection of resources that are kept ready to use, rather than acquired on use and released afterwards. In this context, resources can refer to system resources such as file handles, which are external to a process, or internal resources such as objects. A pool client requests a resource from the pool and performs desired operations on the returned resource. When the client finishes its use of the resource, it is returned to the pool rather than released and lost.
For a real-world example, see the source code of ObjectPool<T> in .NET Core.
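For illustration, here is a minimal pool sketch in Java (a simplified stand-in, not the .NET implementation); the free objects are exactly the ones currently sitting in the internal deque:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Minimal object pool: "free" objects are the ones stored in the deque.
public class SimpleObjectPool<T> {
    private final Deque<T> free = new ArrayDeque<>(); // storage for free objects
    private final Supplier<T> factory;                // creates a new object when none are free

    public SimpleObjectPool(Supplier<T> factory) {
        this.factory = factory;
    }

    // Look for a free object in the pool; if there is none, create a new one.
    public synchronized T acquire() {
        T obj = free.poll();
        return (obj != null) ? obj : factory.get();
    }

    // The client hands the object back; it becomes "free" again and can be reused.
    public synchronized void release(T obj) {
        free.push(obj);
    }
}

Usage: pool.acquire() returns either a previously released (free) object or a brand new one, and pool.release(obj) turns obj back into a free object.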
Related
I am using MFP 8.0, and there is a requirement to implement a cache at the adapter level.
Whenever the MFP server starts, we want to dump the whole database into a cache that lives until the server restarts.
Then, whenever a user hits a transaction or adapter procedure that would call the database, it should read from the cache instead of calling the database.
Adapters support read-only and transactional access modes to back-end systems.
Adapters are Maven projects that contain server-side code implemented in either Java or JavaScript. Adapters are used to perform any necessary server-side logic, and to transfer and retrieve information from back-end systems to client applications and cloud services.
JSONStore is an optional client-side API providing a lightweight, document-oriented storage system. JSONStore enables persistent storage
of JSON documents. Documents in an application are available in
JSONStore even when the device that is running the application is
offline. This persistent, always-available storage can be useful to
give users access to documents when, for example, there is no network
connection available in the device.
From your description, assuming you are talking about some custom DB where your data is stored, you need to implement the logic of caching the data yourself.
Adapters have two classes, <AdapterName>Application.java and <AdapterName>Resource.java. <AdapterName>Application.java contains the lifecycle methods init() and destroy().
Put your custom code for loading data from your DB into the cache in the init() method, and take care of clearing the cache in destroy().
Then, during transactional access (which hits <AdapterName>Resource.java), you refer to the cache you have already created, roughly as sketched below.
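A rough sketch of that idea in Java (the class name, the ConcurrentHashMap cache, and the loadAllRowsFromDatabase() helper are illustrative placeholders; the real class is the one the MFP adapter template generates for you):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only - not the exact MFP-generated class.
public class MyAdapterApplication {

    // Thread-safe cache shared with the <AdapterName>Resource class.
    public static final Map<String, String> CACHE = new ConcurrentHashMap<>();

    // Lifecycle hook called when the adapter is deployed/started.
    protected void init() throws Exception {
        CACHE.putAll(loadAllRowsFromDatabase()); // hypothetical helper that dumps the DB
    }

    // Lifecycle hook called when the adapter is undeployed/stopped.
    protected void destroy() {
        CACHE.clear();
    }

    private Map<String, String> loadAllRowsFromDatabase() {
        Map<String, String> rows = new HashMap<>();
        // ... JDBC code that reads every row you want cached goes here ...
        return rows;
    }
}

The resource class would then call MyAdapterApplication.CACHE.get(key) instead of querying the database.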
Your requirement, however, may not be ideal for heavily loaded systems. You need to consider the following:
a) Your adapter initialization is delayed, and any badly written code can break the adapter initialization altogether. An adapter isn't available to service requests until it has been initialized. In a clustered environment, the adapter load in all cluster members will be delayed depending on the amount of data you are loading. Any client request intended for this adapter will get a runtime exception until the initialization is complete.
b) Holding the cache in memory means that much of the heap is used up. If your DB keeps growing, this adversely affects adapter initialization as well as heap usage.
c) You are in charge of keeping the cached data up to date and cleaning it up after use.
To summarize: while it is possible, it is not recommended. It may work for a very small data set, but it cannot scale well. Adapters are designed to provide transactional access to data/back-end systems, and you should use them the way they were designed.
I'm currently working on an app with a reasonably complex Core Data model. The data model currently has 10 tables in it, with a bunch of relationships set between them. The data for the model is obtained piecemeal from a remote server. In order to minimize the amount of traffic to/from the server, the server API passes object IDs first, giving me a chance to discover if I have already stored the objects. If not, then I can ask the server for the full objects and store them. However, those objects can have references to other objects, for which I will need to follow the same process: check if I have the object(s) and, if not, grab the objects from the server. The Core Data model includes fields for the server IDs, which I use to validate and construct Core Data's object graph.
This creates a situation where objects will have been instantiated in Core Data, but won't have been completely constructed as they may be waiting for referenced objects to be returned by the server (which may, in turn, need to wait for their own reference objects).
So my first attempt to deal with this was to create a semaphore that would not allow the object context to be saved (I only save the context in one place) until all objects are downloaded and the object graph is constructed. The problem I ran into was that the context was being saved anyway, without me asking. This results in a ton of changes propagating through NSFetchedResultsController as objects are downloaded from the server and the object graph is being constructed. Moreover, the propagated objects may not be complete.
Has anyone dealt with anything like this? I think this could all work if I could explicitly control when Core Data saves, but that does not appear to be possible. Or am I missing something?
UPDATE
I was missing something. I was under the impression that NSFetchedResultsController received updates when the Context is saved. This is not true. It receives updates whenever processPendingChanges is called in the context, which occurs at the end of an event cycle. In the past, I've always used two contexts to keep updates separate from the UI, but this project had a deadline and existing code that kept me from refactoring. Given this new information, I think the separate context will fix my problem.
That is an extremely expensive way to sync with a server. Is there a reason your server can't respond to "changed since X" calls and give you everything? In your current design you are spending more time opening and closing sockets than you are receiving data.
Be that as it may, you want to do all of this processing in a secondary context that is connected directly to the NSPersistentStoreCoordinator. When it saves you want to capture the NSManagedObjectContextDidSaveNotification and then have your UI context consume that notification. That will update your UI when your server sync is complete.
This will keep your syncing 100% isolated from the UI and allow the UI to save or do whatever else it needs to do while you are working with the server. I would not use a parent/child design here. There is no reason to.
You access a core data database via the NSManagedObjectContext class.
Each context object must belong to a single thread, and any NSManagedObjects that context creates belong to the same thread.
Do not read or write any managed object from a thread other than the one that created it. If you do, you'll end up with unpredictable and impossible to debug data corruption problems.
However, you can have multiple NSManagedObjectContext instances for a single core data database, each one on a different thread, and you can merge any changes made to the context in one thread over to a context on another thread.
So, basically, you have a "main" NSManagedObjectContext which is used on the main thread, and used for almost all your operations. And then when you need to do something on another thread you create a "child" context for that thread, make all your changes, then merge those changes back to the main context on the main thread.
You can find specific details how to implement this from Apple's official documentation. Start reading here:
https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/CoreData/Articles/cdConcurrency.html#//apple_ref/doc/uid/TP40003385-SW1
I have a rather general question; please advise.
I have a servlet.
This servlet has a private field.
The private field is a kind of metadata holder (public class Metadata { /* bla-bla-bla */ }).
When a GET request is processed, this metadata is used to perform some operation.
I want to implement a POST method in the same servlet: the user uploads a file and the Metadata field is updated.
The problem: the Metadata object in this private field is shared among several web threads that use the one servlet instance. A POST operation (updating the Metadata object) can leave the Metadata in an inconsistent state, so a concurrent GET request can fail.
The question: what is the best way to update the Metadata object while GET requests are running?
Dummy solution:
During each GET request, at the very beginning:
Synchronize on the Metadata object and clone it in that block, then release the lock.
Concurrent GET requests then work with their cloned version of the Metadata object, which is consistent.
During each POST request:
Synchronize on the Metadata object and update its fields.
Release the lock.
Please advise or criticize.
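In code, that dummy solution would look roughly like this, assuming a hypothetical Metadata class with a copy constructor and an updateFrom() method:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MetadataServlet extends HttpServlet {

    private final Metadata metadata = new Metadata(); // shared by all request threads

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        Metadata snapshot;
        synchronized (metadata) {              // short critical section: just the clone
            snapshot = new Metadata(metadata); // assumed copy constructor
        }
        // serve the GET request from the consistent snapshot
        resp.getWriter().println(snapshot);
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        synchronized (metadata) {              // writers update under the same lock
            metadata.updateFrom(req);          // hypothetical update method
        }
    }
}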
Using synchronized set and get methods in the Metadata class is fine, but it may slow your web app down when you have multiple readers and (much) fewer writers:
The Java synchronized keyword is used to acquire an exclusive lock on an object. When a thread acquires the lock of an object, either for reading or writing, other threads must wait until the lock on that object is released. Think of a scenario where there are many reader threads that read shared data frequently and only one writer thread that updates the shared data. It's not necessary to exclusively lock access to shared data while reading, because multiple read operations can be done in parallel unless there is a write operation.
(Excerpt from that nice post)
So using a multiple-reader/single-writer strategy may be better in terms of performance in some cases, as also explained in the Java 5 ReadWriteLock interface documentation:
A read-write lock allows for a greater level of concurrency in
accessing shared data than that permitted by a mutual exclusion lock.
It exploits the fact that while only a single thread at a time (a
writer thread) can modify the shared data, in many cases any number of
threads can concurrently read the data (hence reader threads). In
theory, the increase in concurrency permitted by the use of a
read-write lock will lead to performance improvements over the use of
a mutual exclusion lock. In practice this increase in concurrency will
only be fully realized on a multi-processor, and then only if the
access patterns for the shared data are suitable.
Whether or not a read-write lock will improve performance over the use
of a mutual exclusion lock depends on the frequency that the data is
read compared to being modified, the duration of the read and write
operations, and the contention for the data - that is, the number of
threads that will try to read or write the data at the same time. For
example, a collection that is initially populated with data and
thereafter infrequently modified, while being frequently searched
(such as a directory of some kind) is an ideal candidate for the use
of a read-write lock. However, if updates become frequent then the
data spends most of its time being exclusively locked and there is
little, if any increase in concurrency. Further, if the read
operations are too short the overhead of the read-write lock
implementation (which is inherently more complex than a mutual
exclusion lock) can dominate the execution cost, particularly as many
read-write lock implementations still serialize all threads through a
small section of code. Ultimately, only profiling and measurement will
establish whether the use of a read-write lock is suitable for your
application.
A ready-to-use implementation is ReentrantReadWriteLock.
Take a look at the previous post for a nice tutorial on how to use it.
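Applied to the Metadata case above, a sketch might look like this (the Metadata class and its copy constructor are again hypothetical):

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class MetadataHolder {

    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private Metadata metadata = new Metadata(); // hypothetical shared state

    // Many GET threads may hold the read lock at the same time.
    public Metadata snapshot() {
        lock.readLock().lock();
        try {
            return new Metadata(metadata);      // assumed copy constructor
        } finally {
            lock.readLock().unlock();
        }
    }

    // A POST thread takes the write lock, which excludes readers and other writers.
    public void update(Metadata fresh) {
        lock.writeLock().lock();
        try {
            this.metadata = fresh;
        } finally {
            lock.writeLock().unlock();
        }
    }
}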
I have a WCF service (hosted in a console application over NetTCP); this service has a large volume of static data which gets initialized on load.
I have multiple instances of this console application running at once, and all of them are doing the same static data initialization. Is there a way I can have a single data source and share the data among the processes, so that each process does not have to consume a large amount of memory?
You can use memory-mapped files, but each process must have its own memory due to how Windows protects applications.
From http://msdn.microsoft.com/en-us/library/dd997372.aspx:
Non-persisted files are memory-mapped files that are not associated with a file on a disk. When the last process has finished working with the file, the data is lost and the file is reclaimed by garbage collection. These files are suitable for creating shared memory for inter-process communications (IPC).
With any sort of "shared" data, you'll have the additional task of synchronizing access.
The quick solution would be to write another dedicated service which you run first. It would load the data once and make it available to the other service instances as needed.
The more robust solution is to store the data in a database or caching layer that all the services connect to. The caching layer is a nice choice because your service can lazy-load the data if it's not in the cache (keeping more of your current design) and it can be fast (in-memory). Some cache options include:
Windows AppFabric
Memcached
NCache
I have a launchd daemon that every so often uploads some data via a web service using NSOperationQueue.
I need to be able to persist this data, both so that it can later be re-uploaded in the event of failure, even between sessions (in case of computer shut down, for example).
This is not a high-load application; it probably receives items intermittently, no more than 1 or 2 a minute, often with gaps of several hours in between.
My initial implementation without this persistence in place is as follows:
Daemon receives data.
Daemon parses data into an object of type MyDataObject.
Daemon creates an instance of an NSOperation subclass with the MyDataObject as the object to upload and adds it to its NSOperationQueue.
The NSOperationQueue goes through and uploads each MyDataObject via the web service as it is able.
This part all functions just fine. The part I now want to add is the persistence in case of web service failure, computer shut down, etc.
It seems like I could use an NSMutableArray of MyDataObjects, persisted with NSKeyed(Un)archiver, containing all the items which had not yet been uploaded, and observe the -isFinished key of each operation to remove items from the array. But it seems like there should be a simpler way to do this, with less room for things to go wrong, especially as far as thread safety goes.
Can somebody point me in the right direction?
You could add two operations per item. The first would store the item to local storage, and the second (the upload itself) would depend on the first and would remove the item from local storage once the upload succeeds.
Then, when you want to restore any items from local storage, you create only the store-to-the-cloud operations, not the store-locally operations. As before, they remove the items from local storage only if they succeed, and if they don't succeed, they leave the items in local storage for the next attempt.