Store Global Data Android App With Service - singleton

I have an Android app that consists of several activities/fragments and one service. In one of the activities I create a new variable which I need to access from most of the other activities and from the service. What is the best way to handle this so that the value persists even if the app is closed and reopened? Currently I pass the variable to my service, and then each activity has to use a Messenger to query the service and get the value back. I am wondering if there is a more efficient way of doing this that doesn't require each activity to bind to the service just to get the one value.
Possible Solutions:
1. Singleton - Will this survive the app being closed?
2. Extending Application and storing the value there - Seems like this is discouraged especially for such a simple use case.
3. Store the value in a local database and query it when needed; might be ok but might also be overkill.
4. Combination of 1 and 3, where I have a singleton that returns the value if it has it; if it doesn't, it queries the db and gets the value. This way the db only has to be queried once while the app is running, and the value is persisted across app closes.
Any thoughts?
Thanks,
Nathan

Singleton - Will this survive the app being closed?
It will live only as long as your process lives; once Android terminates the process, the value is gone.
Extending Application and storing the value there - Seems like this is discouraged especially for such a simple use case.
It adds no value over the singleton option in this case.
Store the value in a local database and query it when needed; might be ok but might also be overkill
If you need the data to survive your process being terminated, you will want to persist it somehow (database, file, SharedPreferences). However, your option 4 (using a singleton cache) will be more efficient.
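For what it's worth, here is a minimal sketch of option 4 using a singleton backed by SharedPreferences (which stands in for the database, since it is a single value). The class name, preferences file, and key are made up for the example:

import android.content.Context;
import android.content.SharedPreferences;

public final class ValueHolder {
    private static final String PREFS = "app_prefs";    // hypothetical preferences file name
    private static final String KEY = "shared_value";   // hypothetical key
    private static ValueHolder instance;

    private String value;  // in-memory cache; lost when the process dies

    private ValueHolder() { }

    public static synchronized ValueHolder getInstance() {
        if (instance == null) {
            instance = new ValueHolder();
        }
        return instance;
    }

    // Return the cached value, loading it from SharedPreferences once per process.
    public synchronized String getValue(Context context) {
        if (value == null) {
            SharedPreferences prefs = context.getApplicationContext()
                    .getSharedPreferences(PREFS, Context.MODE_PRIVATE);
            value = prefs.getString(KEY, null);
        }
        return value;
    }

    // Update the cache and persist it, so the value survives the process being killed.
    public synchronized void setValue(Context context, String newValue) {
        value = newValue;
        context.getApplicationContext()
                .getSharedPreferences(PREFS, Context.MODE_PRIVATE)
                .edit()
                .putString(KEY, newValue)
                .apply();
    }
}

Any activity or the service can then call ValueHolder.getInstance().getValue(this) without binding to anything; only the first call after a process restart touches storage.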

Related

NHibernate Second Level Cache with database change notification on desktop App

I am developing a WPF application using NHibernate to communicate with a PostgreSQL Database.
The only caching provider that works in a desktop app is Bamboo Prevalence (correct me if I am wrong). Given that every computer running my application will have a different SessionFactory, my application retrieves stale data from the cache.
My question is, how can I tell NHibernate/Prevalence to look at the timestamp of when the data was last updated, and if the cache is stale, refresh it?
Well, I found out that there is no way the second-level cache can know if the database was changed outside NHibernate/the cache, so what I did was create a new 'Timestamp' column on all my tables.
In my queries, I first select the timestamp from the database with the session's cache mode set to CacheMode.Ignore and compare it with the timestamp of the cached result. If the timestamps differ, I invalidate the cache for that query and run it again.
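To make the idea concrete, here is a rough sketch of that timestamp check. It is written against Hibernate's Java API (NHibernate's is closely analogous); the Customer entity, its lastUpdated column, and the query-cache configuration are assumptions for the example:

import java.util.Date;
import java.util.List;

import org.hibernate.CacheMode;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.query.Query;

public class StaleAwareRepository {
    private final SessionFactory sessionFactory;
    private Date lastSeenTimestamp;  // timestamp of the data we last served from the cache

    public StaleAwareRepository(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // Assumes the query cache is enabled (hibernate.cache.use_query_cache = true).
    public List<Customer> loadCustomers(Session session) {
        // 1. Read the table's current timestamp, bypassing the second-level cache.
        session.setCacheMode(CacheMode.IGNORE);
        Date dbTimestamp = session
                .createQuery("select max(c.lastUpdated) from Customer c", Date.class)
                .uniqueResult();
        session.setCacheMode(CacheMode.NORMAL);

        // 2. If the database is newer than what we cached, evict the cached query results.
        if (lastSeenTimestamp == null
                || (dbTimestamp != null && dbTimestamp.after(lastSeenTimestamp))) {
            sessionFactory.getCache().evictQueryRegions();
            lastSeenTimestamp = dbTimestamp;
        }

        // 3. Run the cacheable query; it now either hits a valid cache or repopulates it.
        Query<Customer> query = session.createQuery("from Customer", Customer.class);
        query.setCacheable(true);
        return query.list();
    }
}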
About SysCache: even knowing it 'can work' in a WPF desktop app, I was not keen to use System.Web.Cache, as my application would then need the complete .NET Framework instead of the Client Profile. I did a search and, to my delight, someone wrote an NHibernate cache provider that uses System.Runtime.Caching, which is not an ASP.NET component. If anyone is interested, you can find the source at:
https://github.com/Leftyx/nhcontrib/tree/master/src/NHibernate.Caches/MemoryCache
Expiration is a property you can set at the cache level, so items expire according to your application's needs. NCache is a possible L2 cache provider for NHibernate. NCache ensures that its cache is consistent across multiple servers and that all cache updates are synchronized correctly, so no data-integrity issues arise. To learn more, please visit:
http://www.alachisoft.com/ncache/nhibernate-l2cache-index.html

When to store local data in my Windows 8 Metro Application

I am developing a Windows 8 Metro application that has a set of settings that the user can specify. I know of a few ways to store these settings in local storage so they can be restored when the user resumes/re-starts the application.
What I want to know is when should I store this data? Periodically? On Application Close/Crash? When exactly? What are the conventions?
I'm not aware of any convention / best practice.
The most convenient way is to have all application data in one big class instance, deserialize it at startup, and serialize it on close/suspend. This way you need only a few lines of code and nearly no logic. A positive side effect is that during operation the app isn't slowed down by loading/saving.
However, when the class gets too big, you might see a noticeable increase in your app's startup/shutdown times. This could ultimately lead to rejection from the marketplace. In that case I recommend saving each small bit of information (e.g. a single user setting) as soon as it changes, and loading each small bit of information only when it is required.
I would have thought that, to some extent, it depends on the data. However, you will need to store the current state of the app on the Suspending event (which may also be the close event).

WCF InstanceContextMode: Per Call vs. Single in this scenario

I want to avoid generating duplicate numbers in my system.
CreateNextNumber() will:
Find the last number created.
Increment the value by one.
Update the value in the database with the new number.
I want to avoid two clients calling this method at the same time. My fear is they will pull the same last number created, increment it by one, and return the duplicate number for both clients.
Questions:
Do I need to use single mode here? I'd rather use Per Call if possible.
The default concurrency mode is single. I don't understand how Per Call would create multiple instances, yet have a single thread. Does that mean that even though multiple instances are created, only one client at a time can call a method in their instance?
If you use InstanceContextMode.Single and ConcurrencyMode.Single, your service will handle one request at a time and so would give you this behaviour; however, this issue is better handled in the database.
Couple of options:
make the field that requires the unique number an identity column and the database will ensure no duplicate values
Wrap the incrementing of the control value in a stored procedure that uses the RepeatableRead isolation level, and read, increment, and write the value within a single transaction
For your questions, you might find my blog article on instancing and concurrency useful.
Single instance will not stop the service from handling requests concurrently, I don't think. You need a server side synchronisation mechanism, such as a Mutex, so that all code that tries to get this number first locks. You might get away with a static locking object inside the service code actually, which will likely be simpler than a mutex.
Basically, this isn't a WCF configuration issue, this is a more general concurrency issue.
private static object ServiceLockingObject = new object();
lock (ServiceLockingObject)
{
// Try to increment your number.
}
Don't bother with WCF settings, generate unique numbers in the database instead. See my answer to this question for details. Anything you try to do in WCF will have the following problems:
If someone deploys multiple instances of your service in a web farm, each instance will generate clashing numbers.
If there is a database error during the reading or writing of the table, then problems will ensue.
The mere act of reading and writing to the table in separate steps will introduce massive concurrency problems. Do you really want to force a serializable table lock and have everything queue up on the unique number generator?
If you begin a transaction in your service code, all other requests will block on the unique number table because it will be part of a long-running transaction.

Sharing variables across multiple sessions

I know I cannot have a global variable in my backend code (Java or PHP or something else) and have different users (and hence sessions) see the same value. If I need to share some values across these user sessions I need to write them to a DB and read them back every time. This seems awfully wasteful to me.
I understand that an Apache process (or the app server) will fork, so having global values will not work, but if I am looking at a specialized application, is there a web server that lets me do this? This should be possible in a web server that uses threads instead of forking processes. But if I need to share global memory I will need some kind of locking to access it properly. I understand that it could (and mostly will) get really buggy, but will it degrade performance compared to a DB?
Thoughts?
Pav
I'm not sure that's entirely true. Apache will handle each user connection individually - correct. However, I know that in Java it is possible to have a Singleton object that exists for the life of the application, in which you could potentially store values to be used across all user sessions.
When handling each user connection on the server side, each access to this Singleton will access the same object - therefore the same values.
You might want to do some more research into application scope objects as well. I'm not sure exactly what you're trying to achieve due to lack of a use case, but you may find that Java web apps can do more than you expect in this area.
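As a rough illustration of that application-scope idea, here is a minimal servlet sketch that keeps a shared value in the ServletContext, guarded by a lock so concurrent requests don't race; the attribute name and URL pattern are made up for the example:

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/counter")
public class SharedCounterServlet extends HttpServlet {
    private static final String ATTR = "sharedCounter";  // hypothetical attribute name
    private static final Object LOCK = new Object();     // guards the read-modify-write below

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        long current;
        synchronized (LOCK) {
            // The ServletContext is shared by every session in this web app,
            // so all users read and update the same value.
            Long stored = (Long) getServletContext().getAttribute(ATTR);
            current = (stored == null) ? 1 : stored + 1;
            getServletContext().setAttribute(ATTR, current);
        }
        resp.getWriter().println("Shared value: " + current);
    }
}

The value lives only as long as the server process (and only on that one server), so anything that must survive a restart or be shared across a cluster still needs a database or similar store; for frequently read values, though, this avoids a round trip on every request.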

Timer-based event triggers

I am currently working on a project with specific requirements. A brief overview of these are as follows:
Data is retrieved from external webservices
Data is stored in SQL 2005
Data is manipulated via a web GUI
The windows service that communicates with the web services has no coupling with our internal web UI, except via the database.
Communication with the web services needs to be both time-based, and triggered via user intervention on the web UI.
The current (pre-pre-production) model for triggering web service communication is via a database table that stores trigger requests generated by manual intervention. I do not really want to have multiple trigger mechanisms, but I would like to be able to populate the database table with triggers based upon the time of the call. As I see it there are two ways to accomplish this.
1) Adapt the trigger table to store two extra parameters: one indicating whether the trigger is time-based or manually added, and a nullable field to store the timing details (exact format to be determined). If it is a manually created trigger, mark it as processed when the trigger has been fired, but not if it is a timed trigger.
or
2) Create a second windows service that creates the triggers on-the-fly at timed intervals.
The second option seems like a fudge to me, but managing option 1 could easily turn into a programming nightmare (how do you know whether the last poll of the table returned the event that needs to fire, and how do you then stop it re-triggering on the next poll?).
I'd appreciate it if anyone could spare a few minutes to help me decide which route (one of these two, or possibly a third, unlisted one) to take.
Why not use a SQL Job instead of the Windows service? You can encapsulate all of your db "trigger" code in stored procedures. Then your UI and the SQL Job can call the same stored procedures and create the triggers the same way, whether manually or at a timed interval.
The way I see it is this.
You have a Windows service, which plays the role of a scheduler, and in it there are some classes that simply call the web services and put the data in your databases.
So, you can use these classes directly from the WebUI as well and import the data based on the WebUI trigger.
I don't like the idea of storing a user generated action as a flag (trigger) in the database where some service will poll it (at an interval which is not under the user's control) to execute that action.
You could even convert the whole thing into an exe that you schedule using the Windows Task Scheduler, and call the same exe whenever the user triggers the action from the Web UI.
@Vaibhav
Unfortunately, the physical architecture of the solution will not allow any direct communication between the components, other than Web UI to database, and database to service (which can then call out to the web services). I do, however, agree that re-use of the communication classes would be ideal here - I just can't do it within the confines of our business*
*Isn't it always the way that a technically "better" solution is stymied by external factors?