Database for live mobile tracking - sql

I'm developing an app that allows a mobile device to be tracked instantly (live), and I need some advice. The application must send the location to a web service, which in turn records the received data in a database.
What would be, in your opinion, the best way to store the location values?
I'm new to big data and I'm afraid that simple SQL queries won't be able to do the job properly. I imagine that if there are a lot of users, each sending a request every second, I'll have issues with the database.
Any advice? Thank you very much.

I think you could have a look at the geospatial queries in Mongo, if you choose to go ahead with MongoDB. Refer to the MongoDB documentation on geospatial indexes and queries.
The design of the database will depend on the nature of the queries (essentially the reads and writes). Worth having a look into.
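For illustration, here is a minimal sketch of that kind of setup using the official Node.js MongoDB driver (the database, collection and field names are made up): each location update is stored as a GeoJSON point under a 2dsphere index, and nearby points are found with a $near query.

```typescript
import { MongoClient } from "mongodb";

// Minimal sketch: connection string, database, collection and field names are
// assumptions; the 2dsphere index and $near operator are standard MongoDB.
async function main(): Promise<void> {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const locations = client.db("tracking").collection("locations");

  // GeoJSON points indexed with a 2dsphere index enable geospatial queries.
  await locations.createIndex({ position: "2dsphere" });

  // One location update per device, stored as a GeoJSON Point ([lng, lat]).
  await locations.insertOne({
    deviceId: "device-123",
    position: { type: "Point", coordinates: [2.3522, 48.8566] },
    recordedAt: new Date(),
  });

  // Find points within 500 metres of a given position.
  const nearby = await locations
    .find({
      position: {
        $near: {
          $geometry: { type: "Point", coordinates: [2.3522, 48.8566] },
          $maxDistance: 500, // metres
        },
      },
    })
    .limit(20)
    .toArray();

  console.log(nearby);
  await client.close();
}

main().catch(console.error);
```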

Working at Cintric, we landed on using Elasticsearch. We process billions of location points in real time and provide advanced analytics to our users.
We started with MongoDB and ran into a lot of trouble, eventually leading to a painful migration.
Our stack currently has mobile devices dump location updates into AWS Kinesis, which are then processed by AWS Lambda handlers and dumped into Elasticsearch. We're able to serve, process and store 300 million requests per month for only a few hundred dollars a month. Analytics for our dashboard add additional cost, but for your needs I would highly recommend checking out your options on AWS.
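Just to make the shape of that pipeline concrete, here is a rough sketch (not Cintric's actual code; the index name, document layout and the 7.x @elastic/elasticsearch client are assumptions) of a Lambda handler that decodes Kinesis records and bulk-indexes them into Elasticsearch:

```typescript
import { Client } from "@elastic/elasticsearch";
import type { KinesisStreamEvent } from "aws-lambda";

// Sketch only: the endpoint, index name and document shape are assumptions.
const es = new Client({ node: process.env.ES_ENDPOINT ?? "http://localhost:9200" });

// Kinesis delivers records base64-encoded; each one is a JSON location update.
export async function handler(event: KinesisStreamEvent): Promise<void> {
  const docs = event.Records.map((record) =>
    JSON.parse(Buffer.from(record.kinesis.data, "base64").toString("utf8"))
  );

  // One bulk request per invocation keeps Elasticsearch write overhead low.
  const body = docs.flatMap((doc) => [{ index: { _index: "locations" } }, doc]);
  const response = await es.bulk({ body });

  if (response.body.errors) {
    throw new Error("some location documents failed to index");
  }
}
```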

Related

Deploy REST API over multiple servers world-wide, but stay in sync

I've built a REST API with a pretty decent latency. Each request is answered in ~100 ms with a thousand requests per second. This is however with a relatively low physical distance to the data center. The users of this API would, however, be spread all over the globe. From the US for example (to a data center in Germany), the response time for a single request is ~400 ms under no load.
What would be the best approach to deploying this API? I suspect multiple servers at different locations, with a load balancer in front. How would I ensure that the MySQL database would stay in sync between the servers in that case?
With multiple servers and a load balancer, the costs rise exponentially, which is something I can hopefully afford in the future, but not at the moment.
I'd love to hear your ideas!
AFAIK, for big projects people use event sourcing with an event store, microservices, and message queues between them. A more basic solution is polling the event store through a simple REST API: something like "send me the latest events since the last event I received". If you can accept eventual consistency, then I think this approach can work pretty well. It makes writes somewhat slower, but reads can be very fast. There is no need to sync the MySQL databases directly; you just pull the latest events and use a projection to update the local MySQL database, so the event store is the single source of truth.
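A minimal sketch of that polling idea (route, store and field names are made up), using Express: writers append events to a log, and each replica asks for everything after the last event id it has applied, then projects those events into its local MySQL database.

```typescript
import express from "express";

// Sketch: in a real system the events would live in a durable, append-only
// store (e.g. a MySQL table); an in-memory array keeps the example short.
interface StoredEvent {
  id: number; // monotonically increasing sequence number
  type: string;
  payload: unknown;
}

const events: StoredEvent[] = [];
let nextId = 1;

const app = express();
app.use(express.json());

// Writers append events; the event log is the single source of truth.
app.post("/events", (req, res) => {
  const event: StoredEvent = { id: nextId++, type: req.body.type, payload: req.body.payload };
  events.push(event);
  res.status(201).json({ id: event.id });
});

// Readers poll: "send me the latest events since the last event I received".
app.get("/events", (req, res) => {
  const since = Number(req.query.since ?? 0);
  res.json(events.filter((e) => e.id > since));
});

app.listen(3000);
```

Each regional server just remembers the highest id it has applied and replays anything newer into its own MySQL projection.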

PostgreSQL for multiple users

I am building an app for a workshop at a conference. It will be used by the participants to input answers to a survey on their mobile devices and then these answers will be saved to a database.
I am currently looking at PostgreSQL, and from what I have seen it is more than capable of handling the roughly 100 users I expect to be using the app at one time. What I haven't been able to decide conclusively is whether these 100 people all adding to the same database at once will cause any problems. I have looked into locks and understand that there shouldn't be any conflicts when inserting into tables (which is all the users will be doing), but I just wanted to confirm before moving forward with the app.
I assume it is also important to deploy the app using a hosting service which can handle the load. I am intending to use Heroku, which I have experience deploying PostgreSQL databases to.
Just in case it is relevant, I was intending to use Knex.js to build the database in a Node backend.
Happy to provide any further information and would appreciate any input or better suggestions to look into.
Cheers,
Tim
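For what it's worth, concurrent plain INSERTs are the easy case for PostgreSQL: each insert only locks the rows it creates, so 100 participants writing at once won't conflict with each other. A minimal Knex sketch of that workload (the table and column names are made up):

```typescript
import knex from "knex";

// Sketch: the connection string, table and columns are assumptions.
const db = knex({
  client: "pg",
  connection: process.env.DATABASE_URL, // e.g. the Heroku Postgres URL
  pool: { min: 2, max: 10 },            // a modest pool copes fine with ~100 users
});

// One-off schema setup for a hypothetical survey answers table.
export async function createSchema(): Promise<void> {
  if (!(await db.schema.hasTable("answers"))) {
    await db.schema.createTable("answers", (table) => {
      table.increments("id").primary();
      table.string("participant_id").notNullable();
      table.integer("question_id").notNullable();
      table.text("answer").notNullable();
      table.timestamp("created_at").defaultTo(db.fn.now());
    });
  }
}

// Each submission is a plain INSERT; concurrent inserts don't block each other.
export async function saveAnswer(participantId: string, questionId: number, answer: string) {
  return db("answers").insert({
    participant_id: participantId,
    question_id: questionId,
    answer,
  });
}
```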

What should I specifically test at cloud storage based mobile applications?

I am developing a sensor-based mobile application for iOS and Android. The data produced by the smartphone sensors will be stored in the cloud. At this point, I am wondering what I should test about the data transfer and storage. For example, I should test the scenario where the connection drops while a GPS data transfer is not yet finished. I am not looking for techniques or testing styles; I am trying to find possible failure points or test scenarios. I hope I have explained my point.
Below are some of the things worth considering for your app:
Incomplete transfers when the connection drops (as you mentioned); see the sketch below
Cloud server capacity: how many requests can it handle at a single instance?
If you are considering cloud solutions, also consider where your users will be accessing your app from; the distance between users and the data center will affect response time.
The format of the data to be stored; choosing a format that is fast in I/O will also help optimize the speed of the app.
Asynchronous vs. synchronous data transfer
Security measures on the cloud, for example using services like VPC if you are considering AWS
These are some things worth considering.
Thanks :)
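To make the incomplete-transfer point above concrete, here is a sketch of the client-side behaviour such a test would exercise (the endpoint, payload shape and header are made up): each batch of readings carries a stable id and is retried with backoff, so a connection that drops mid-transfer can neither lose nor duplicate the batch.

```typescript
// Sketch: the URL, payload shape and Idempotency-Key handling are assumptions;
// the point is that retries after a dropped connection are safe because the
// server can deduplicate on the stable batchId.
class UploadRejectedError extends Error {
  constructor(public status: number) {
    super(`upload rejected with status ${status}`);
  }
}

interface SensorBatch {
  batchId: string; // stable per batch, reused on every retry
  readings: Array<{ sensor: string; value: number; recordedAt: string }>;
}

export async function uploadWithRetry(batch: SensorBatch, maxAttempts = 5): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = await fetch("https://api.example.com/sensor-batches", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Idempotency-Key": batch.batchId,
        },
        body: JSON.stringify(batch),
      });
      if (res.ok) return;                                               // server has the batch exactly once
      if (res.status < 500) throw new UploadRejectedError(res.status);  // don't retry client errors
      // 5xx: fall through to the backoff and try again.
    } catch (err) {
      if (err instanceof UploadRejectedError) throw err;
      if (attempt === maxAttempts) throw err; // e.g. connection dropped on the last try
    }
    await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 250)); // exponential backoff
  }
  throw new Error("upload failed after all retry attempts");
}
```

A test for the first scenario would then cut the connection partway through the POST and assert that the next successful retry leaves exactly one copy of the batch in the cloud store.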

Which db should I choose for my transaction logging

I have a database question. I am developing an application where users send a request and get an answer from a vendor. I have a server receiving the requests (through a REST call or a running web service, I haven't decided which yet).
Whenever a new request comes in, it should be logged in a database, and when the vendor responds, the record should be updated to indicate whether it was accepted or not, and so on. The only reason for this storage of transactions is for reporting and logging purposes. So now that I have stated my requirement, I need help from someone with more expertise in this.
What I've come up with so far is that it would be best to use a structured database since all records will have one type and the same information, so there's no need to waste space using a semi-structured database with each record containing both structure and information.
But I don't know if there are any databases that are particularly good for this kind of "create/update operations only" workload. As I said, I only need to read the data perhaps once a month or so.
Any inputs are appreciated!
You can use any open-source database like PostgreSQL, as you are mostly going to do inserts and don't need many other features. My suggestion would be to put the logging process in a separate thread rather than the one you are using for processing, to get better performance for your API calls.
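A minimal sketch of that idea (table and column names are made up), using node-postgres: API handlers only push log entries onto an in-memory queue, and a background loop flushes them, so the request path never waits on the database. An upsert covers both the initial insert and the later status update when the vendor answers.

```typescript
import { Pool } from "pg";

// Sketch: connection settings, table name and columns are assumptions; the
// table is assumed to have a unique constraint on request_id.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

interface TransactionLog {
  requestId: string;
  vendor: string;
  status: "pending" | "accepted" | "rejected";
}

// API handlers only enqueue; they never wait on an INSERT.
const queue: TransactionLog[] = [];
export function logTransaction(entry: TransactionLog): void {
  queue.push(entry);
}

// Background flusher: drains the queue once a second and writes each entry.
setInterval(async () => {
  const batch = queue.splice(0, queue.length);
  for (const entry of batch) {
    try {
      await pool.query(
        `INSERT INTO transaction_log (request_id, vendor, status, updated_at)
         VALUES ($1, $2, $3, now())
         ON CONFLICT (request_id)
         DO UPDATE SET status = EXCLUDED.status, updated_at = now()`,
        [entry.requestId, entry.vendor, entry.status]
      );
    } catch (err) {
      console.error("failed to write transaction log entry", err);
      queue.push(entry); // put it back and retry on the next tick
    }
  }
}, 1000);
```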
I'm developing an application with a lot of create/update queries and am currently using Neo4j.
It's fast and works really well with Java EE and PHP. NoSQL is really quick to learn with it, and the web interface is really user-friendly :)

What's the best way to get a 'lot' of small pieces of data synced between a Mac App and the Web?

I'm considering MongoDB right now. Just so the goal is clear, here is what needs to happen:
In my app, Finch (finchformac.com for details), I have thousands and thousands of entries per day for each user: what window they had open, the time they opened it, the time they closed it, and a tag if they chose one for it. I need this data to be backed up online so it can sync to their other Mac computers, etc. I also need to be able to draw charts online from their data, which means some complex queries hitting hundreds of thousands of records.
Right now I have tried using Ruby/Rails/Mongoid with a JSON parser on the app side, sending up data in increments of 10,000 records at a time; the data is then processed into other collections with a background MapReduce job. But this all seems to block and is ultimately too slow. What recommendations does anyone have for how to go about this?
You've got a complex problem, which means you need to break it down into smaller, more easily solvable issues.
Problems (as I see it):
1. You've got an application which is collecting data. You just need to store that data somewhere locally until it gets synced to the server.
2. You've received the data on the server and now you need to shove it into the database fast enough that it doesn't slow down.
3. You've got to report on that data, and this sounds hard and complex.
You probably want to write this as some sort of API. For simplicity (and since you've got loads of spare processing cycles on the clients), you'll want these chunks of data processed on the client side into JSON, ready to import into the database. Once you've got JSON you don't need Mongoid (you can just throw the JSON into the database directly). You also probably don't need Rails, since you're just creating a simple API, so stick with just Rack or Sinatra (possibly using something like Grape).
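That suggestion is Ruby-flavoured (Rack/Sinatra/Grape), but the "throw the JSON into the database directly" part looks roughly the same in any thin HTTP layer. A sketch in TypeScript with Express and the Node MongoDB driver (route, database, collection and payload shape are all made up):

```typescript
import express from "express";
import { MongoClient } from "mongodb";

// Sketch: URL, database, collection and payload shape are assumptions.
const client = new MongoClient(process.env.MONGO_URL ?? "mongodb://localhost:27017");
const entries = client.db("finch").collection("entries");

const app = express();
app.use(express.json({ limit: "10mb" })); // the client sends large pre-built batches

// No ODM in the middle: the desktop app ships documents already shaped for the
// database, and the server inserts them as-is.
app.post("/batches", async (req, res) => {
  try {
    const docs = req.body.entries; // e.g. 10,000 window-usage records per batch
    await entries.insertMany(docs, { ordered: false });
    res.status(201).json({ inserted: docs.length });
  } catch (err) {
    res.status(500).json({ error: "insert failed" });
  }
});

client.connect().then(() => app.listen(3000));
```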
Now you need to solve the whole "this all seems to block and is ultimately too slow" issue. We've already removed Mongoid (so there's no need to convert from JSON -> Ruby objects -> JSON) and Rails. Before we get on to doing a MapReduce on this data, you need to ensure it's getting loaded into the database quickly enough. Chances are you should architect the whole thing so that your MapReduce supports your reporting functionality. For syncing of data you shouldn't need to do anything but pass the JSON around. If your data isn't writing into your DB fast enough, you should consider sharding your dataset. This will probably be done using some user-based key, but you know your data schema better than I do. You need to choose your shard key so that when multiple users are syncing at the same time they will probably be using different servers.
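For reference, picking a user-based shard key looks something like this (database, collection and key are made up, and the commands must be run against the mongos of a sharded cluster):

```typescript
import { MongoClient } from "mongodb";

// Sketch: database, collection and shard key are assumptions.
async function shardByUser(): Promise<void> {
  const client = await MongoClient.connect(process.env.MONGOS_URL ?? "mongodb://localhost:27017");
  const admin = client.db("admin");

  // A user-based shard key spreads simultaneous syncs from different users
  // across different shards.
  await admin.command({ enableSharding: "finch" });
  await admin.command({ shardCollection: "finch.entries", key: { user_id: 1 } });

  await client.close();
}

shardByUser().catch(console.error);
```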
Once you've solved Problems 1 and 2, you need to work on your reporting. This is probably supported by your MapReduce functions inside Mongo. My first comment on this part is to make sure you're running at least Mongo 2.0; in that release 10gen sped up MapReduce (my tests indicate that it is substantially faster than 1.8). Beyond that you can achieve further increases by sharding and by directing reads to the secondary servers in your replica set (you are using a replica set?). If this still isn't working, consider structuring your schema to support your reporting functionality. This lets you use more cycles on your clients to do work rather than loading your servers. But this optimisation should be left until after you've proven that conventional approaches won't work.
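As a rough illustration of the reporting side (collection, field names and the output collection are made up; mapReduce was the era-appropriate tool and has since been deprecated in favour of the aggregation pipeline), a job that pre-computes total open time per user and tag into a summary collection, so the dashboard charts read small summaries instead of raw entries:

```typescript
import { MongoClient } from "mongodb";

// Sketch: collection, field names and output collection are assumptions.
async function buildTagReport(): Promise<void> {
  const client = await MongoClient.connect(process.env.MONGO_URL ?? "mongodb://localhost:27017");
  const db = client.db("finch");

  // Pre-aggregate total open time per (user, tag) into a reporting collection.
  await db.command({
    mapReduce: "entries",
    map: "function () { emit({ user: this.user_id, tag: this.tag }, this.closed_at - this.opened_at); }",
    reduce: "function (key, values) { return values.reduce(function (a, b) { return a + b; }, 0); }",
    out: { replace: "time_per_tag" },
  });

  await client.close();
}

buildTagReport().catch(console.error);
```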
I hope that wall of text helps somewhat. Good luck!