I need to get a large amount of data (~100MB) from SQL Server into the app's SQLite database once a day, wirelessly.
The app has a JSON/RESTful web service for other things, but I figured this isn't possible here, since loading 100MB into memory as a JSON object would crash the app when I try to write the JSON to SQLite.
I am now considering retrieving a file from a URL and saving it locally. That way the data isn't loaded into memory.
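For illustration, a minimal sketch of that streaming idea, assuming a C#/.NET client (the URL and file names are placeholders, not from the original setup):

    using System.IO;
    using System.Net.Http;
    using System.Threading.Tasks;

    class DbDownloader
    {
        // Streams the HTTP response straight to disk, so the ~100MB payload
        // never has to fit in memory as one object.
        static async Task DownloadToFileAsync(string url, string localPath)
        {
            using (var client = new HttpClient())
            using (var response = await client.GetAsync(
                url, HttpCompletionOption.ResponseHeadersRead))
            {
                response.EnsureSuccessStatusCode();
                using (var body = await response.Content.ReadAsStreamAsync())
                using (var file = File.Create(localPath))
                {
                    await body.CopyToAsync(file); // copies in small buffered chunks
                }
            }
        }

        static async Task Main()
        {
            // Hypothetical export URL, for illustration only.
            await DownloadToFileAsync(
                "https://example.com/exports/siteX.sqlite.gz", "siteX.sqlite.gz");
        }
    }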
The part I get fuzzy on is the best way to get the data, i.e., download a compressed .sqlite file, or download a text file of prepared INSERT statements to run against an existing SQLite database. I'm pretty sure the former is the better choice, but I'm not DB-savvy enough to know what's possible on the SQL Server side. Is it possible for a SQL server to select data subsets and create a SQLite file? Maybe it just needs to be scripted.
One thing to consider is the structure of the data on the SQL server. I need subsets of data from several tables, not the entire tables. Example: the SQL server houses data for 100 physical sites; the app is at site X today, so just load site X's data.
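SQL Server itself has no way to emit a SQLite file, but a small scripted exporter can: query the site's subset out of SQL Server and write it into a fresh SQLite database for clients to download. A hedged sketch, assuming the System.Data.SqlClient and Microsoft.Data.Sqlite packages; the clients table and its columns are invented:

    using System.Data.SqlClient;   // SQL Server side
    using Microsoft.Data.Sqlite;   // SQLite side

    class SiteExporter
    {
        // Copies one site's rows from SQL Server into a new SQLite file.
        static void ExportSite(string sqlServerConnStr, int siteId, string sqlitePath)
        {
            using (var src = new SqlConnection(sqlServerConnStr))
            using (var dst = new SqliteConnection($"Data Source={sqlitePath}"))
            {
                src.Open();
                dst.Open();

                var create = dst.CreateCommand();
                create.CommandText =
                    "CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT, phone TEXT)";
                create.ExecuteNonQuery();

                var query = new SqlCommand(
                    "SELECT id, name, phone FROM clients WHERE site_id = @site", src);
                query.Parameters.AddWithValue("@site", siteId);

                // One transaction around all inserts makes bulk SQLite writes much faster.
                using (var tx = dst.BeginTransaction())
                {
                    var insert = dst.CreateCommand();
                    insert.Transaction = tx;
                    insert.CommandText =
                        "INSERT INTO clients (id, name, phone) VALUES (@id, @name, @phone)";
                    insert.Parameters.Add("@id", SqliteType.Integer);
                    insert.Parameters.Add("@name", SqliteType.Text);
                    insert.Parameters.Add("@phone", SqliteType.Text);

                    using (var reader = query.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            insert.Parameters["@id"].Value = reader.GetInt32(0);
                            insert.Parameters["@name"].Value = reader.GetString(1);
                            insert.Parameters["@phone"].Value = reader.GetString(2);
                            insert.ExecuteNonQuery();
                        }
                    }
                    tx.Commit();
                }
            }
        }
    }

Compress the resulting file before serving it; SQLite files tend to shrink well.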
Am I on the right track, or did I miss an obvious solution?
For every row representing a client's data (name, phone, etc.) I also need to save 3 images. Is it better to save the images to FTP or in the SQL DB?
The images will be shown in a Bootstrap carousel.
(I'll use ASP.NET 5 / MVC 6 with an MS SQL DB.)
I would say that if you have a very good infrastructure, saving in the DB is better, and here is why I think that:
You can access it in the same way as your other data
No extra setup for serving the media
If you have multiple servers, your sync is done with the DB
But if you have a small app on a small server, putting the media in a folder and keeping a reference to the file in the DB is not the end of the world. Just remember that if you have more than one server, you will have to replicate the files to the other servers as well.
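A minimal sketch of that folder-plus-reference approach, in modern ASP.NET Core terms (the controller, the uploads folder, and the SaveImagePath call are all invented for illustration):

    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;

    public class ClientImagesController : Controller
    {
        private readonly IWebHostEnvironment _env;
        public ClientImagesController(IWebHostEnvironment env) => _env = env;

        [HttpPost]
        public async Task<IActionResult> Upload(int clientId, IFormFile image)
        {
            // Store the bytes on disk under wwwroot/uploads...
            var dir = Path.Combine(_env.WebRootPath, "uploads");
            Directory.CreateDirectory(dir); // no-op if it already exists
            var fileName = $"{clientId}_{Path.GetFileName(image.FileName)}";
            using (var stream = System.IO.File.Create(Path.Combine(dir, fileName)))
            {
                await image.CopyToAsync(stream);
            }

            // ...and keep only the relative path in the database.
            // SaveImagePath is a hypothetical data-access helper, not a real API:
            // SaveImagePath(clientId, $"/uploads/{fileName}");
            return Ok();
        }
    }

The Bootstrap carousel then just points its img tags at /uploads/... and the web server serves the files directly.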
I would say you should try both ways before making a decision.
There is a situation in my company where we are developing a lightweight .NET web application with as few dependencies as possible. The application will be hosted on the client's server. However, there will not be any internet connection, and they will use the application locally.
We do not want any type of database installation on the client machine; we want to keep the client side as simple as possible. For this purpose we want to save/access data from a file, as the data on the client side will not exceed 100,000 rows. We are also concerned about the speed of accessing the data.
Here I want to ask: how should the data be saved in a file so that it can be accessed quickly? What should the file format be?
Can I use a DB file that does not require any database installation on the client side?
You could save all the data to a JSON file, but this will become increasingly slow and prone to corruption as the file grows.
Also, have a look at SQLite.
You can try SQL Server Compact Edition or SQLite. Both are file-based solutions and fit your need.
The advantage of these two is that you can perform almost all the usual database queries against them, and data retrieval will be very fast. You can also think about optimizing the data storage, creating tables, etc.
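A minimal sketch of the SQLite option from C#, assuming the Microsoft.Data.Sqlite package (file and table names are placeholders):

    using Microsoft.Data.Sqlite;

    class LocalStore
    {
        static void Main()
        {
            // A single .db file next to the app: nothing to install on the client.
            using (var conn = new SqliteConnection("Data Source=appdata.db"))
            {
                conn.Open();

                var cmd = conn.CreateCommand();
                cmd.CommandText =
                    "CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, payload TEXT)";
                cmd.ExecuteNonQuery();

                cmd.CommandText = "INSERT INTO records (payload) VALUES (@p)";
                cmd.Parameters.AddWithValue("@p", "hello");
                cmd.ExecuteNonQuery();

                cmd.Parameters.Clear();
                cmd.CommandText = "SELECT COUNT(*) FROM records";
                long count = (long)cmd.ExecuteScalar();
            }
        }
    }

An index on whichever column you filter by will keep lookups fast even at 100,000 rows.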
You can use SQLite, which is heavily used in such scenarios (among others, by Chrome and Firefox). It is even public domain, so no license costs, etc.
I am about to create an ASP.NET MVC app which will have over 2000 products, and each product will have approximately 20 photos. I am using SQL Server 2008 R2 to manage my data. Which is the better approach here:
1. Uploading pictures to a path and storing their file names in the database, in order to be able to relate them to each other.
2. Storing pictures inside the database as bytes as well, and retrieving them from there when needed.
Definitely the filesystem (store the path) is better; I have done both in the past.
Against SQL Server for storing images:
A) Getting data in and out can be more difficult, as you have to use blob-type objects, and some ORMs don't really cater for this.
B) Your database is much bigger, which affects your backup/restore policy. The more frequently you back up the better, but the space required will grow. Storing in files, you still need backups, but backing up a filesystem is easy.
C) When you run out of storage space, you just add another NAS drive/server and start storing images there, so it scales horizontally.
The common objection is that the data now lives in two places, but for me it's better, as each type of data is stored in the medium best suited to it.
Definitely store a path rather than the byte array. This means you can easily change the actual image itself without having to alter any code or muck around in SQL (as long as the new file has the same name as the old one).
Hope this helps.
In the database, using FILESTREAM, which combines the two ideas (file and database):
FILESTREAM integrates the SQL Server Database Engine with an NTFS file system by storing varbinary(max) binary large object (BLOB) data as files on the file system. Transact-SQL statements can insert, update, query, search, and back up FILESTREAM data. Win32 file system interfaces provide streaming access to the data.
This changes the file-vs-database arguments.
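A hedged sketch of what reading FILESTREAM data can look like from .NET via the SqlFileStream API (the Photos table and its columns are invented; FILESTREAM has to be enabled on the server, and SqlFileStream is Windows-only):

    using System.Data;
    using System.Data.SqlClient;
    using System.Data.SqlTypes;
    using System.IO;

    class FilestreamDemo
    {
        static byte[] ReadPhoto(string connStr, int id)
        {
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();
                // FILESTREAM access must happen inside a transaction.
                using (var tx = conn.BeginTransaction())
                {
                    string path;
                    byte[] txContext;
                    using (var cmd = new SqlCommand(
                        "SELECT Data.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
                        "FROM Photos WHERE Id = @id", conn, tx))
                    {
                        cmd.Parameters.AddWithValue("@id", id);
                        using (var rdr = cmd.ExecuteReader())
                        {
                            rdr.Read();
                            path = rdr.GetString(0);
                            txContext = (byte[])rdr[1];
                        }
                    }

                    byte[] data;
                    // Streams the BLOB through the Win32 file system interface.
                    using (var fs = new SqlFileStream(path, txContext, FileAccess.Read))
                    using (var ms = new MemoryStream())
                    {
                        fs.CopyTo(ms);
                        data = ms.ToArray();
                    }
                    tx.Commit();
                    return data;
                }
            }
        }
    }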
If you want to store paths only, then you'll have to accept the fact that the images and the database will get out of sync over time.
How big (binary(xy)) should I make my table column in the SQL database if I want to store pictures taken by a camera there? That means variable sizes up to... I don't know... 7MB? But if I should rather limit the size to 2MB or so, I would. What's your opinion?
EDIT
Or where else should I store them? I am building a web gallery using ASP.NET MVC.
What you're talking about is a varbinary column. Anything larger than 8000 bytes has to be declared as varbinary(max), meaning the column can store up to 2GB. This has to do with how SQL Server stores rows (8KB per page).
Therefore, each row stores such a column as a pointer to the actual bits anyway. So what I would do, if I were you, would be to store the images on the file system and store the locations of those files in the database.
If you want to store images in SQL Server then use the varbinary(max) column type. It permits up to 2GB (if I recall correctly).
Also, as you are using SQL Server 2008 (I don't know about the Express edition, though), you could use the new FILESTREAM data type.
Of course, the big advantage of storing this in the database is that you only have one thing to back up, and you don't have issues with the file system and database getting out of sync. The new FILESTREAM type is an interesting development because it can help alleviate these problems.
The disadvantage of storing this data in the database is that you put additional strain on the database, especially if the bandwidth between your database and web server is already strained.
As others have already stated in the comments (which, BTW, should have been answers, despite the pedantic police), you really have to have some killer reason to store images in the database. Otherwise just place them in the file system.
This is especially true where the images are delivered from a web server. The web server is way more efficient at delivering images from the file system than your code will be at extracting them from a database.
I am developing an Adobe AIR application which stores data locally using a SQLite database.
At any time, I want the end user to synchronize his/her local data to a central MySQL database.
Any tips, advice for getting this right?
Performance and stability are the key (besides security ;)).
I can think of a couple of ways:
Periodically, dump your MySQL database and create a new SQLite database from the dump. You can then serve the SQLite database (SQLite databases are contained in a single file) for your users' clients to download and replace the current database.
Create a diff script that generates the necessary statements to bring the current database up to speed (various INSERT, UPDATE and DELETE statements). To do this, you must record the time of each change continuously in your database (the time of creation and update for each row, and keep a history of deleted rows).
The user's client will download the diff file (a text file of the various statements) and apply it to the local database.
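A hedged sketch of how such a diff file might be generated on the server, assuming the MySql.Data package and invented table/column names (including an updated_at change-tracking column, as described above):

    using System;
    using System.IO;
    using MySql.Data.MySqlClient;

    class DiffGenerator
    {
        // Writes one INSERT OR REPLACE per row changed since the client's last sync.
        static void WriteDiff(string connStr, DateTime lastSync, string outPath)
        {
            using (var conn = new MySqlConnection(connStr))
            using (var outFile = new StreamWriter(outPath))
            {
                conn.Open();
                var cmd = new MySqlCommand(
                    "SELECT id, name, phone FROM clients WHERE updated_at > @since", conn);
                cmd.Parameters.AddWithValue("@since", lastSync);

                using (var rdr = cmd.ExecuteReader())
                {
                    while (rdr.Read())
                    {
                        // INSERT OR REPLACE covers both "new row" and "updated row"
                        // on the SQLite side.
                        outFile.WriteLine(
                            "INSERT OR REPLACE INTO clients (id, name, phone) " +
                            $"VALUES ({rdr.GetInt32(0)}, " +
                            $"'{rdr.GetString(1).Replace("'", "''")}', " +
                            $"'{rdr.GetString(2).Replace("'", "''")}');");
                    }
                }

                // Deletions would come from a tombstone/history table, e.g.
                // SELECT id FROM deleted_clients WHERE deleted_at > @since,
                // emitted as DELETE FROM clients WHERE id = ...;
            }
        }
    }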
Both approaches have their own pros and cons - by dumping the entire database, you make sure all the data gets through. It is also much easier than creating the diff; however, it might put more load on the server, depending on how often the database gets updated between dumps.
On the other hand, diffing between database states will give you just the data that changed (hopefully), but it is more open to logical errors. It will also incur additional overhead on the client, since it will have to create/update all the necessary records instead of just copying a file.
If you're just sync'ing from the server to client, Eran's solution should work.
If you're just sync'ing from the client to the server, just reverse it.
If you're sync'ing both ways, have fun. At a minimum you'll probably want to keep change logs, and you'll need to figure out how to deal with conflicts.
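For illustration, one common shape for such a change log, with all names invented and a simple last-writer-wins conflict policy assumed:

    // Sketch only: a per-change log table, expressed as a DDL string.
    class ChangeLogSchema
    {
        public const string Ddl = @"
            CREATE TABLE change_log (
                id         INTEGER PRIMARY KEY,
                table_name TEXT NOT NULL,
                row_id     INTEGER NOT NULL,
                op         TEXT NOT NULL,   -- 'I', 'U' or 'D'
                changed_at TEXT NOT NULL,   -- UTC timestamp
                device_id  TEXT NOT NULL    -- who made the change
            );";

        // During sync, replay both sides' logs ordered by changed_at;
        // when two devices touched the same row, the later change wins.
    }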