I am working on a mobile game that uses Couchbase Mobile on the client and Iris Couch to store data in the cloud. In order for the app to work offline, each user has a database on the server which is replicated to the device. Everything looks promising except for one detail: when the data is replicated to the device it takes up too much disk space. For example, a remote database containing 400 documents is approximately 100 MB, while the replicated database on the device is approximately 390 MB with the same number of documents. Has anyone experienced such an issue? Any help would be appreciated.
TouchDB stores data in a completely different format than CouchDB: it actually writes into SQLite tables. Another factor that may be coming into play is the lack of Snappy compression on the TouchDB side, which CouchDB recently added and which significantly decreased its on-disk requirements.
Considering your documents seem to be relatively large, I suspect that the difference you are seeing is mostly a compressed-vs.-raw difference. If you can determine that this is indeed where the difference is coming from, you could try filing a feature request for similar compression to be implemented in the TouchDB project.
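If you want to confirm that, one way is to compare what CouchDB itself reports for the remote database (the database info endpoint returns doc_count and disk_size, and newer versions also report data_size) against the size of the SQLite file TouchDB creates on the device. A rough sketch in Java, with the host and database name as placeholders:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class CouchDbSizeCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder host/database name; GET /{db} returns JSON including
        // doc_count and disk_size (bytes on disk for the remote database).
        URL url = new URL("http://my-iriscouch-host:5984/user_db_123");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
            // Compare the reported disk_size against the size of the SQLite
            // file that TouchDB creates on the device.
            System.out.println(body);
        }
    }
}
```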
I am trying to do something new, something I have never done before, and I am looking for advice, or for someone to point me in the right direction on how to choose the technology. I am trying to build a race simulation app that will have thousands of IoT devices streaming data into a central platform. I understand that I can use some sort of IoT hub with a cloud provider, but what technology do I choose for storing the data?
An example is an online indoor biking app. There are apps where you can connect your indoor bike online and have a simulated race. For my project I am trying to build something similar. Do I use a NoSQL DB in this scenario? What technology will allow an application like this to scale, since there could be millions of devices around the world in a "simulated" race? I am not worried about the front end and things like that, but about the backend, the IoT hub, storing the data, and presenting it in real time.
At this point it is important to understand what kind of data your IoT devices will stream, and at what rate. That will have a significant impact on the answer to your question.
That is, if it's just location information and some other small data sent, let's say, once a second, then if you're talking about tens of thousands of devices this is not a big load of information, and any standard database, like MySQL, will be able to deal with it. You will of course need a multi-threaded server (or servers) capable of handling many requests in parallel.
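To give a sense of scale for that simple-telemetry case, here is a rough Java/JDBC sketch of batching such readings into MySQL; the table and column names (readings, device_id, recorded_at, latitude, longitude) are made up for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;
import java.util.List;

public class ReadingWriter {
    // Hypothetical reading: one small telemetry sample per device per second.
    public record Reading(long deviceId, long epochMillis, double lat, double lon) {}

    public static void insertBatch(List<Reading> readings) throws Exception {
        String sql = "INSERT INTO readings (device_id, recorded_at, latitude, longitude) "
                   + "VALUES (?, ?, ?, ?)";
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/race", "user", "password");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            for (Reading r : readings) {
                ps.setLong(1, r.deviceId());
                ps.setTimestamp(2, new Timestamp(r.epochMillis()));
                ps.setDouble(3, r.lat());
                ps.setDouble(4, r.lon());
                ps.addBatch();            // batching keeps round trips low
            }
            ps.executeBatch();
        }
    }
}
```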
If your IoT devices will stream HD video, then you're looking at a completely different solution, with a much stronger server capable of handling a lot of streams in parallel, significant bandwidth requirements from your hosting company, and storage space for all the videos. In this case you will store the streams as files (if you need them later on), and you won't need any special database either.
In any case, once you reach millions of users, you'll be able to scale most modern databases and servers, for example via MySQL's replication capability. Take a look at how Wikipedia relies on MySQL: https://www.mysql.com/why-mysql/case-studies/mysql-cs-wikipedia.html
So I wouldn't worry about the database at this stage, but I would make sure that the design of the system is in accordance with the type of data and the rate at which it is streamed.
Hope this gives you a pointer.
I'm developing an app that tracks a mobile device instantly (live), and I need some advice. The application must send the location to a web service, which in turn records the received data in a database.
What would be, in your opinion, the best way to store the location values?
I'm new to big data and I'm afraid that simple SQL requests won't be able to do the work properly. I imagine that if there are a lot of users and each user sends a request every second, I'll have issues with the database.
Any advice? Thank you very much.
I think you could have a look at the geospatial queries in Mongo, if you choose to go ahead with MongoDB.
Refer here
And here
The design of the database will depend on the nature of the queries (essentially the reads and writes), so that is worth having a look into.
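For instance, with the MongoDB Java driver a 2dsphere index on the location field lets you run near-style queries. This is only a sketch, and the database, collection and field names are placeholders:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Indexes;
import com.mongodb.client.model.geojson.Point;
import com.mongodb.client.model.geojson.Position;
import org.bson.Document;

public class GeoQueryExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> positions =
                    client.getDatabase("tracking").getCollection("positions");

            // A 2dsphere index is required for $near-style queries.
            positions.createIndex(Indexes.geo2dsphere("location"));

            // Store one position update (GeoJSON point: [longitude, latitude]).
            positions.insertOne(new Document("userId", "user-1")
                    .append("location", new Document("type", "Point")
                            .append("coordinates", java.util.Arrays.asList(2.3522, 48.8566)))
                    .append("ts", System.currentTimeMillis()));

            // Find positions within 500 meters of a point.
            Point center = new Point(new Position(2.3522, 48.8566));
            for (Document d : positions.find(Filters.near("location", center, 500.0, 0.0))) {
                System.out.println(d.toJson());
            }
        }
    }
}
```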
Working at Cintric, we landed on using Elasticsearch. We process billions of location points in real time and provide advanced analytics to our users.
We started with MongoDB and ran into a lot of trouble, eventually leading to a painful migration.
Our current stack has mobile devices dump location updates into AWS Kinesis; the updates are then processed by AWS Lambda handlers and written into Elasticsearch. We're able to serve, process and store 300 million requests/month for only a few hundred dollars/month. Analytics for our dashboard adds additional cost, but for your needs I would highly recommend checking out your options on AWS.
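For reference, the device-facing end of such a pipeline is essentially just a Kinesis put; here is a rough sketch with the AWS SDK for Java, where the stream name and payload shape are assumptions:

```java
import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.PutRecordRequest;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class LocationProducer {
    private static final AmazonKinesis KINESIS = AmazonKinesisClientBuilder.defaultClient();

    // Sends one location update into the stream; a Lambda consumer can then
    // index it into Elasticsearch.
    public static void sendLocation(String deviceId, double lat, double lon) {
        String json = String.format("{\"deviceId\":\"%s\",\"lat\":%f,\"lon\":%f,\"ts\":%d}",
                deviceId, lat, lon, System.currentTimeMillis());

        PutRecordRequest request = new PutRecordRequest()
                .withStreamName("location-updates")   // placeholder stream name
                .withPartitionKey(deviceId)           // keeps a device's records ordered per shard
                .withData(ByteBuffer.wrap(json.getBytes(StandardCharsets.UTF_8)));

        KINESIS.putRecord(request);
    }
}
```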
I am running an Azure web role which is storing very small blobs into Azure Storage. (The blob upload is being done from the server, not from the browser.) I have searched Stack Overflow and the rest of the internet for tips on optimizing blob storage performance, and I believe I've checked and implemented all of the usual suspects: uploading asynchronously and allowing unlimited outgoing web connections (which now seems to be the default setting on web roles and no longer needs to be explicitly set in web.config or in code).
Tweaking the number of concurrent uploads I allow makes some difference, but regardless of what I've tried, I seem to max out at around 1,000 blob uploads per second. This is when running in the Azure web role, in the same region as the storage account (East US). My rate when running this from home over a good internet connection isn't much less, ~700 blobs/sec, which seems to tell me that it's not the network latency that's limiting the rate, but the actual processing time of the storage service.
I wouldn't normally consider these rates horrible for this kind of a service, but I've read that Microsoft boasts a rate of ~20,000 storage transactions per second, so I've been a little disappointed with these results.
I'd like to get some feedback from those who have really tried to push the limits of blob storage. Does ~1,000 small uploads per second sound about right? Or is there possibly something else I should be doing to improve this? I'll post the code if I need to, but I'd rather not receive speculative answers; I'd like to hear from developers who can either confirm that my results are reasonable or tell me that they've seen much higher throughput.
I should add that I'm currently running this in a small web role. I've tried it also in a medium web role, and didn't see any significant difference.
EDIT:
After a few days of development and testing, my upload rate seemed to suddenly increase, not by a lot, but maybe by another ~200 per second. In looking around the web, I noticed a comment in the Azure documentation stating "A storage account scales automatically as usage increases." So I'm wondering if it really is capable of much higher rates, but will not automatically scale up until it sees a sustained period of high volume. Some confirmation of that would also be greatly appreciated.
Depending on how small your requests are, the problem might be caused by Nagle's algorithm (see "Nagle's Algorithm is Not Friendly towards Small Requests"), although usually I see that with queue / table operations. Try disabling Nagle's and let me know if that makes any difference. As an FYI, you have to disable it prior to establishing the connection, otherwise the changes will not take effect.
Jason
I have a Web Application (Java backend) that processes a large amount of raw data that is uploaded from a hardware platform containing a number of sensors.
Currently the raw data is uploaded, then decompressed and stored as a 'text' field in a PostgreSQL database, allowing users to log in and generate various graphs/charts of the data (using a JS charting library client-side).
Example string...
[45,23,45,32,56,75,34....]
The arrays will typically contain ~300,000 values, but this could be up to 1,000,000 depending on how long the sensors are recording, so the size of the string being stored could be a few hundred kilobytes.
This currently works fine as there are only ~200 uploads per day, but as I am looking at the scalability of the application and the ability to back up the data, I am looking at alternatives for storing it.
DynamoDB looked like a great option for me, as I can carry on storing the upload details in my SQL table and just save a URL endpoint to be called to retrieve the arrays... but then I noticed the item size is limited to 64 KB.
As I am sure there are a million and one ways to do this, I would like to put it out to the SO community to hear what others would recommend, either web services or locally stored, considering performance, scalability, maintainability, etc.
Thanks in advance!
UPDATE:
Just to clarify: the data shown above is just the 'Y' values. As the data is time-sampled, the X values are taken from the position in the array, so I don't think storing it as tuples would have any benefit.
If you are looking to store such strings, you probably want to use S3 (one object containing the array string); in this case you will get "backup" out of the box by enabling bucket versioning.
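A rough sketch of that approach with the AWS SDK for Java; the bucket name and key scheme are placeholders, and versioning only needs to be enabled once per bucket:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.SetBucketVersioningConfigurationRequest;

public class SensorArrayStore {
    private static final AmazonS3 S3 = AmazonS3ClientBuilder.defaultClient();
    private static final String BUCKET = "sensor-uploads";   // placeholder bucket name

    // One-time setup: keep old versions of every object as a built-in "backup".
    public static void enableVersioning() {
        S3.setBucketVersioningConfiguration(new SetBucketVersioningConfigurationRequest(
                BUCKET, new BucketVersioningConfiguration(BucketVersioningConfiguration.ENABLED)));
    }

    // Store the decompressed array string as a single object; keep only the
    // S3 key (or URL) in the relational upload record.
    public static void saveArray(long uploadId, String arrayJson) {
        S3.putObject(BUCKET, "uploads/" + uploadId + ".json", arrayJson);
    }

    public static String loadArray(long uploadId) {
        return S3.getObjectAsString(BUCKET, "uploads/" + uploadId + ".json");
    }
}
```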
You can try a combination of Couchbase and Elasticsearch. Couchbase is a very fast document-oriented NoSQL database; several thousand insert operations per second is normal for CB, and the item size limit is 20 MB. The performance of "get" operations is in the tens of thousands per second. There is one disadvantage: you can only query data by ID (there are "views", but I think it would be too difficult to adapt them to the plotting). Elasticsearch, which can perform any query very fast, may compensate for this deficiency. Both Couchbase and Elasticsearch store data as JSON documents.
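For reference, storing and fetching an upload by ID with the Couchbase Java SDK looks roughly like this; the sketch assumes the 2.x SDK, and the host, bucket and document layout are placeholders:

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonArray;
import com.couchbase.client.java.document.json.JsonObject;

public class CouchbaseUploadStore {
    public static void main(String[] args) {
        Cluster cluster = CouchbaseCluster.create("localhost");   // placeholder host
        Bucket bucket = cluster.openBucket("uploads");            // placeholder bucket

        // One upload = one JSON document, keyed by the upload id.
        JsonObject doc = JsonObject.create()
                .put("uploadId", 42)
                .put("samples", JsonArray.from(45, 23, 45, 32, 56, 75, 34));
        bucket.upsert(JsonDocument.create("upload::42", doc));

        // Retrieval is a key-value get, which is where Couchbase is fastest.
        JsonDocument fetched = bucket.get("upload::42");
        System.out.println(fetched.content());

        cluster.disconnect();
    }
}
```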
I have just come across Google Cloud Datastore, which allows me to store single String properties of up to 1 MB (unindexed); it seems like a good alternative to Dynamo.
Maybe you should use Redis or SSDB; both are designed to store large lists (arrays) of data. The difference between these two databases is that Redis is memory-only (with disk for backup), while SSDB is disk-based and uses memory as a cache.
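With Redis the array maps naturally onto a list; here is a rough sketch using the Jedis client, with the key naming made up for illustration:

```java
import redis.clients.jedis.Jedis;
import java.util.List;

public class RedisArrayStore {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "upload:42:samples";     // placeholder key scheme

            // Append the sensor values to a Redis list (RPUSH preserves order).
            jedis.rpush(key, "45", "23", "45", "32", "56", "75", "34");

            // Read the whole list back for charting (0 to -1 = all elements).
            List<String> samples = jedis.lrange(key, 0, -1);
            System.out.println(samples);
        }
    }
}
```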
I was wondering if there are memory limits for Metro style apps. I am not talking about RAM; that, I have already found out, has a limit of 150 MB, right?
I want to know if there is a restriction on using local storage (the hard disc storage). I am creating a database to save a lot of data. Can I do so until the device runs out of storage? (I am not actually planning to, but occupying something like 80 MB of storage would be nice.)
Thanks in advance.
There is no limit on local data.
Local application data should be used for any information that needs to be preserved between application sessions and is not suitable, in type or size, for roaming application data. Data that is not applicable on other devices should be stored here as well. There is no general size restriction on locally stored data. The location is available via the localFolder property. Use the local app data store for data that it does not make sense to roam and for large data sets.
From here. There is a limit on roaming data; the same document covers that.