I have more of a conceptual question... hopefully that is okay.
Is AsyncStorage meant for repeated calls? For example, I have an application with a slideshow, and I want the app to remember where in the slideshow the user was each time they open it.
I was thinking of using AsyncStorage to update the index each time the slide changes, but I'm worried that means I'm accessing it too much and constantly rewriting the index. Is that over the top, or is that within the way it's intended to be used?
Thanks!
Your approach is totally okay. I also use AsyncStorage for a slider component and save its value after each change event, but I've debounced it to only one write within 500 ms (the last write wins here).
If you plan to store bigger documents, you should consider that internally AsyncStorage stores all keys and values in one huge file. Depending on the overall size, it can get slow and battery-consuming.
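For what it's worth, here is a minimal sketch of that debounced write, assuming the community @react-native-async-storage/async-storage package; the storage key and the 500 ms window are just illustrative choices:

```typescript
import AsyncStorage from '@react-native-async-storage/async-storage';

const SLIDE_INDEX_KEY = 'slideshow:index'; // hypothetical key name
let pendingIndex: number | null = null;
let timer: ReturnType<typeof setTimeout> | null = null;

// Call this on every slide change; at most one write happens per 500 ms,
// and the most recent index within that window is the one persisted.
export function rememberSlideIndex(index: number): void {
  pendingIndex = index;
  if (timer !== null) return; // a write is already scheduled
  timer = setTimeout(async () => {
    timer = null;
    if (pendingIndex === null) return;
    await AsyncStorage.setItem(SLIDE_INDEX_KEY, String(pendingIndex));
    pendingIndex = null;
  }, 500);
}

// Call this once at startup to restore the last position.
export async function restoreSlideIndex(): Promise<number> {
  const stored = await AsyncStorage.getItem(SLIDE_INDEX_KEY);
  return stored !== null ? Number(stored) : 0;
}
```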
Related
I've developed a React Native application where users can log in, choose different items from a list, see the details of an item (its profile), and add/delete/edit posts (attached to an item).
Now my user base has grown, and therefore I have decided to introduce new database tables in order to log each action my users take, so that I can analyze the accumulated data later and optimize, for example, the usability.
My first question is: is there any convention or standard that lists the data to be collected in such a case (like log time, action, ...)? I don't want to lose useful data because I noticed its value too late.
And: at what intervals should the app send the users' log data to my remote server (async requests after each action, daily, before logout, ...)? Is there a gold standard?
Actually, it's more about how much data you would like to collect and whether that matches your privacy terms and conditions. If you're going to store the data on a server other than your own to analyse it, it is highly recommended that you don't reference user IDs there, for obvious privacy reasons.
As for when to send the log data, again it depends on what you want to track. If you are tracking how many minutes users spend on a screen or how they interact with posts, you may need to send that to your server regularly, depending on whether you want to analyse the data instantly to improve the user experience (for example, to show more relevant posts) or only use it later. If the data you need to analyse isn't that large, you can send it after each action. If you're planning to track huge amounts of data that you don't need right away, you could send it in time frames where you don't have a big load on your server; to save bandwidth you might choose night time (in practice it's a little more complicated than that).
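As a rough sketch of the batching approach (the endpoint URL, flush interval, batch size, and field names below are all made up for illustration):

```typescript
type LogEntry = {
  action: string;
  screen: string;
  timestamp: string; // ISO 8601
};

const FLUSH_INTERVAL_MS = 60_000; // hypothetical: flush once a minute
const MAX_BATCH_SIZE = 50;        // or earlier, once the buffer is full

let buffer: LogEntry[] = [];

// Record one user action locally; flush when the buffer gets large.
export function logAction(action: string, screen: string): void {
  buffer.push({ action, screen, timestamp: new Date().toISOString() });
  if (buffer.length >= MAX_BATCH_SIZE) {
    void flush();
  }
}

// Send the buffered entries in one request instead of one request per action.
async function flush(): Promise<void> {
  if (buffer.length === 0) return;
  const batch = buffer;
  buffer = [];
  try {
    await fetch('https://example.com/logs', { // hypothetical endpoint
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(batch),
    });
  } catch {
    buffer = batch.concat(buffer); // keep the entries and retry on the next flush
  }
}

setInterval(() => void flush(), FLUSH_INTERVAL_MS);
```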
I'm creating a RESTful web service (in Golang) which pulls a set of rows from the database and returns them to a client (smartphone app or web application). The service needs to be able to provide paging. The only problem is that this data is sorted on a regularly changing "computed" column (for example, the number of "thumbs up" or "thumbs down" a piece of content on a website has), so rows can jump around between page numbers from one client request to the next.
I've looked at a few PostgreSQL features that I could potentially use to help me solve this problem, but nothing really seems to be a very good solution.
Materialized Views: to hold "stale" data which is only updated every once in a while. This doesn't really solve the problem, as the data would still jump around if the user happens to be paging through the data when the Materialized View is updated.
Cursors: created for each client session and held between requests. This seems like it would be a nightmare if there are a lot of concurrent sessions at once (which there will be).
Does anybody have any suggestions on how to handle this, either on the client side or database side? Is there anything I can really do, or is an issue such as this normally just remedied by the clients consuming the data?
Edit: I should mention that the smartphone app allows users to view more pieces of data through "infinite scrolling", so it keeps track of its own list of data client-side.
This is a problem without a perfectly satisfactory solution because you're trying to combine essentially incompatible requirements:
Send only the required amount of data to the client on-demand, i.e. you can't download the whole dataset then paginate it client-side.
Minimise amount of per-client state that the server must keep track of, for scalability with large numbers of clients.
Maintain different state for each client
This is a "pick any two" kind of situation. You have to compromise; accept that you can't keep each client's pagination state exactly right, accept that you have to download a big data set to the client, or accept that you have to use a huge amount of server resources to maintain client state.
There are variations within those that mix the various compromises, but that's what it all boils down to.
For example, some people will send the client some extra data, enough to satisfy most client requirements. If the client exceeds that, then it gets broken pagination.
Some systems will cache client state for a short period (in short-lived unlogged tables, tempfiles, or whatever), but expire it quickly, so if the client isn't constantly asking for fresh data it gets broken pagination.
Etc.
See also:
How to provide an API client with 1,000,000 database results?
Using "Cursors" for paging in PostgreSQL
Iterate over large external postgres db, manipulate rows, write output to rails postgres db
offset/limit performance optimization
If PostgreSQL count(*) is always slow how to paginate complex queries?
How to return sample row from database one by one
I'd probably implement a hybrid solution of some form, like:
Using a cursor, read and immediately send the first part of the data to the client.
Immediately fetch enough extra data from the cursor to satisfy 99% of clients' requirements. Store it to a fast, unsafe cache like memcached, Redis, BigMemory, EHCache, whatever under a key that'll let me retrieve it for later requests by the same client. Then close the cursor to free the DB resources.
Expire the cache on a least-recently-used basis, so if the client doesn't keep reading fast enough they have to go get a fresh set of data from the DB, and the pagination changes.
If the client wants more results than the vast majority of its peers, pagination will change at some point as you switch to reading directly from the DB rather than the cache, or as you generate a new, bigger cached dataset.
That way most clients won't notice pagination issues and you don't have to send vast amounts of data to most clients, but you won't melt your DB server either. However, you need a big boofy cache to get away with this. Whether it's practical depends on whether your clients can cope with pagination breaking; if it's simply not acceptable to break pagination, then you're stuck with doing it DB-side with cursors, temp tables, copying the whole result set at the first request, etc. It also depends on the data set size and how much data each client usually requires.
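To make the cache-and-expire part of that hybrid concrete, here is a rough sketch (in TypeScript for brevity rather than the question's Go, with an in-process Map standing in for memcached/Redis; the row shape, fetchRowsFromDb, TTL, and prefetch size are all hypothetical):

```typescript
interface Row {
  id: number;
  score: number; // the regularly changing "computed" column
}

const demoRows: Row[] = []; // placeholder data so the sketch is self-contained

// In the real service this would be the SQL query,
// e.g. SELECT ... ORDER BY score DESC LIMIT $1
async function fetchRowsFromDb(limit: number): Promise<Row[]> {
  return demoRows.slice(0, limit);
}

const CACHE_TTL_MS = 5 * 60 * 1000; // arbitrary: snapshots expire after 5 minutes
const PREFETCH_ROWS = 1000;         // "enough for 99% of clients" guess

type CacheEntry = { rows: Row[]; expiresAt: number };
const cache = new Map<string, CacheEntry>(); // keyed by a per-client token

// Serve each page from the client's frozen snapshot so pagination stays
// stable while they scroll; take a fresh snapshot only on miss or expiry.
export async function getPage(
  clientToken: string,
  page: number,
  pageSize: number,
): Promise<Row[]> {
  const now = Date.now();
  let entry = cache.get(clientToken);

  if (!entry || entry.expiresAt < now) {
    // Miss or expired: this is where the client may notice pagination
    // "jumping", which is the accepted compromise.
    entry = {
      rows: await fetchRowsFromDb(PREFETCH_ROWS),
      expiresAt: now + CACHE_TTL_MS,
    };
    cache.set(clientToken, entry);
  }

  const start = page * pageSize;
  if (start + pageSize > entry.rows.length) {
    // The client wants more than was prefetched: fall back to the DB,
    // again accepting that the ordering may have changed since the snapshot.
    const rows = await fetchRowsFromDb(start + pageSize);
    return rows.slice(start, start + pageSize);
  }
  return entry.rows.slice(start, start + pageSize);
}
```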
I am not aware of a perfect solution for this problem, but if you want the user to have a stale view of the data then a cursor is the way to go. The only tuning you can do is to store only the data for the first two pages in the cursor; beyond that, you fetch again.
I have a ListView in my application that shows transport events. This ListView should be updated every second to follow the events.
I currently do that with a timer (1000 ms interval) that creates a connection object, a DataReader, and so on, then fills the ListView, and finally disposes the connection and the other objects (all of this on every timer tick).
Now, is there a better way to do this? Maybe something better for performance, memory, or anything else?
I'm not an expert, so I thought that creating many connections every second might cause memory problems :) (correct me if that's wrong).
Database: Access 2007
VS 2012
Thank you.
Assuming that you are using ADO.NET to access your database, your access model should be fine, because .NET uses connection pooling to minimize performance impacts of closing and re-opening DB connections.
Your overall architecture may be questioned, however: polling for updates on a timer is usually not the best option. A better approach would be maintaining an update sequence in a separate table. The table would have a single row, with a single int column initially set to zero. Every time an update to the "real" data is made, this number is bumped up by one. With this table in place, your program could read just this one number every second, rather than re-reading your entire data set. If your program detects that the number is the same as it was the previous time, it stops and waits for the next timer interval; otherwise, it re-reads the data set.
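A rough sketch of that check-the-counter-first idea (written in TypeScript for brevity, since the shape is the same in C#/ADO.NET; the table, column, and helper names are made up):

```typescript
// Hypothetical data-access helpers; in the real app these map onto the
// ADO.NET calls you already have (one tiny query plus the existing big one).
async function readUpdateSequence(): Promise<number> {
  // e.g. SELECT Seq FROM UpdateSequence  -- a one-row, one-column table
  return 0; // placeholder
}

async function readTransportEvents(): Promise<string[]> {
  // the existing query that fills the ListView
  return []; // placeholder
}

// Every writer bumps the counter, e.g. UPDATE UpdateSequence SET Seq = Seq + 1

let lastSeenSequence = -1;

// Called on each timer tick (every second). The big query only runs when
// the sequence number has actually moved.
export async function pollForChanges(
  refreshListView: (rows: string[]) => void,
): Promise<void> {
  const current = await readUpdateSequence();
  if (current === lastSeenSequence) return; // nothing changed, skip the reload
  lastSeenSequence = current;
  refreshListView(await readTransportEvents());
}

// Hypothetical wiring:
// setInterval(() => void pollForChanges(renderRows), 1000);
```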
I am developing a Windows 8 Metro application that has a set of settings the user can specify. I know of a few ways to store these settings in local storage so they can be restored when the user resumes/restarts the application.
What I want to know is when should I store this data? Periodically? On Application Close/Crash? When exactly? What are the conventions?
I'm not aware of any convention / best practice.
The most convenient way is to have all application data in one big class instance, deserialize it at startup, and serialize it on close/suspend. This way you need only a few lines of code and nearly no logic. A positive side effect is that during normal operation the app isn't slowed down by loading/saving.
However, when the class gets too big, you might see a noticeable increase in your app's startup/shutdown times. This could ultimately lead to rejection from the marketplace. In that case I recommend saving each small piece of information (e.g. a single user setting) immediately, and loading each piece only when it's actually required.
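For illustration, the one-big-state-object approach boils down to something like this sketch (TypeScript with JSON serialization and stubbed storage helpers; a Metro app would use its own serializer and local app-data storage in the Suspending handler):

```typescript
// All persistent application data lives in one object.
interface AppState {
  lastOpenedPage: number;
  userSettings: Record<string, string>;
}

const DEFAULT_STATE: AppState = { lastOpenedPage: 0, userSettings: {} };

// Hypothetical storage helpers, stubbed so the sketch is self-contained;
// a real app would read/write a file in its local app-data folder instead.
let backingStore: string | null = null;
async function readStateFile(): Promise<string | null> {
  return backingStore;
}
async function writeStateFile(contents: string): Promise<void> {
  backingStore = contents;
}

export let appState: AppState = DEFAULT_STATE;

// Deserialize once at startup...
export async function loadState(): Promise<void> {
  const raw = await readStateFile();
  appState = raw !== null ? (JSON.parse(raw) as AppState) : DEFAULT_STATE;
}

// ...and serialize once when the app is closed or suspended.
export async function saveState(): Promise<void> {
  await writeStateFile(JSON.stringify(appState));
}
```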
I would have thought that to some extent this depends on the data. However, you will need to store the current state of the app on the Suspending event (which may also be the close event).
I'm considering MongoDB right now. Just so the goal is clear here is what needs to happen:
In my app, Finch (see finchformac.com for details), I have thousands and thousands of entries per day for each user: which window they had open, the time they opened it, the time they closed it, and a tag if they chose one for it. I need this data to be backed up online so it can sync to their other Macs, etc. I also need to be able to draw charts online from their data, which means some complex queries hitting hundreds of thousands of records.
Right now I have tried using Ruby/Rails/Mongoid with a JSON parser on the app side, sending up data in increments of 10,000 records at a time; the data is then processed into other collections with a background MapReduce job. But this all seems to block and is ultimately too slow. What recommendations does anyone have for how to go about this?
You've got a complex problem, which means you need to break it down into smaller, more easily solvable issues.
Problems (as I see it):
1. You've got an application which is collecting data. You just need to store that data somewhere locally until it gets synced to the server.
2. You've received the data on the server and now you need to shove it into the database fast enough that it doesn't slow things down.
3. You've got to report on that data, and this sounds hard and complex.
You probably want to write this as some sort of API. For simplicity (and since you've got loads of spare processing cycles on the clients), you'll want these chunks of data processed on the client side into JSON that's ready to import into the database. Once you've got JSON you don't need Mongoid (you can just throw the JSON into the database directly). You also probably don't need Rails, since you're just creating a simple API, so stick with plain Rack or Sinatra (possibly using something like Grape).
Now you need to solve the whole "this all seems to block and is ultimately too slow" issue. We've already removed Mongoid (so there's no need to convert from JSON -> Ruby objects -> JSON) and Rails. Before we get to doing a MapReduce on this data, you need to make sure it's getting loaded into the database quickly enough. Chances are you should architect the whole thing so that your MapReduce supports your reporting functionality. For syncing the data you shouldn't need to do anything but pass the JSON around. If your data isn't being written into your DB fast enough, you should consider sharding your dataset. This will probably be done using some user-based key, but you know your data schema better than I do. You need to choose your shard key so that when multiple users are syncing at the same time they will probably be hitting different servers.
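As a sketch of the "throw the JSON straight in" ingest path (shown with the Node mongodb driver purely for illustration; the same shape works from Sinatra with the Ruby driver, and the connection string, database, and collection names are made up):

```typescript
import { Document, MongoClient } from 'mongodb';

// Hypothetical connection string, database, and collection names.
const client = new MongoClient('mongodb://localhost:27017');

// Take a chunk of already-serialized entries from the client and insert
// them directly, with no ORM/ODM objects in between.
export async function ingestBatch(entries: Document[]): Promise<void> {
  const events = client.db('finch').collection('window_events');
  // Unordered insert: one bad document doesn't stop the rest of the batch.
  await events.insertMany(entries, { ordered: false });
}

// Hypothetical wiring behind a minimal HTTP endpoint:
// await client.connect();
// await ingestBatch(JSON.parse(requestBody));
```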
Once you've solved problems 1 and 2, you need to work on your reporting. This is probably supported by your MapReduce functions inside Mongo. My first comment on this part is to make sure you're running at least Mongo 2.0; in that release 10gen sped up MapReduce (my tests indicate that it is substantially faster than 1.8). Beyond that, you can achieve further gains by sharding and by directing reads to the secondary servers in your replica set (you are using a replica set?). If this still isn't working, consider structuring your schema to support your reporting functionality. This lets you use more cycles on your clients to do work rather than loading your servers. But this optimisation should be left until after you've proven that conventional approaches won't work.
I hope that wall of text helps somewhat. Good luck!