Is saving thumbnail preview as base64 in the local DB a good idea? - react-native

I'm new to RN and I want to build something similar to the phone's native photo app. I'm developing a grid preview of all of a user's photos in a FlatList. We're talking tens of thousands of photos. I'm saving the user's photo information in a local DB and was wondering whether storing the thumbnail as base64 to boost performance is a good idea or not. What would be the standard practice in this scenario?
Edit: I'm using Expo's MediaLibrary.getAssetsAsync() with pagination, and it's not very performant. If I only request a few dozen photos at a time, each load is fast, but you need more loads as you scroll; if I load many at once, I can scroll further, but I have to wait longer for each load. I even tried a quick small load followed by a second, larger load to trick the user into a feeling of smooth scrolling. Wouldn't storing the results in a local DB make things much faster?
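Roughly what my loading looks like now (a simplified sketch; PAGE_SIZE is just the knob I've been tuning):

import * as MediaLibrary from "expo-media-library";

const PAGE_SIZE = 100; // bigger pages mean fewer loads but longer waits
let endCursor: string | undefined;

async function loadNextPage(): Promise<MediaLibrary.Asset[]> {
  const page = await MediaLibrary.getAssetsAsync({
    first: PAGE_SIZE,
    after: endCursor, // undefined on the very first call
    mediaType: MediaLibrary.MediaType.photo,
  });
  endCursor = page.endCursor;
  return page.assets; // appended to the FlatList data in state
}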

The data is already stored locally, so adding a local database here is, I think, useless; it just adds complexity to your architecture.
If you're building a rich-content app and fetch pictures from the network, you can opt for a cache-and-thumbnail solution.
It all depends on how you want to present your application, and whether a preview of the image is sufficient before loading the full-size image.
For my part, I would reserve the cache and thumbnails for data that comes over the internet. For local data, I set up progressive loading of the images: detect which images fall in the area to be displayed and load only those; there is no need to load the others (see the sketch below).
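A minimal sketch of that progressive loading, using FlatList's built-in windowing props (the numbers are starting points to tune, not recommendations):

import React from "react";
import { Dimensions, FlatList, Image } from "react-native";

const SIZE = Dimensions.get("window").width / 3; // 3-column grid

type Thumb = { id: string; uri: string };

export function PhotoGrid({ photos, loadMore }: { photos: Thumb[]; loadMore: () => void }) {
  return (
    <FlatList
      data={photos}
      numColumns={3}
      keyExtractor={(t) => t.id}
      renderItem={({ item }) => (
        <Image source={{ uri: item.uri }} style={{ width: SIZE, height: SIZE }} />
      )}
      initialNumToRender={24} // first screenful only
      maxToRenderPerBatch={24} // render in small chunks while scrolling
      windowSize={7} // keep a few screens above/below mounted
      removeClippedSubviews // detach off-screen rows from the native view tree
      // fixed cell height lets the list scroll anywhere without measuring
      getItemLayout={(_data, i) => ({ length: SIZE, offset: SIZE * Math.floor(i / 3), index: i })}
      onEndReached={loadMore} // fetch the next page before the user hits the end
      onEndReachedThreshold={2}
    />
  );
}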

Related

coldfusion MSFT SQL Image Store

I am using ColdFusion 10 and working on a new project where users will be allowed to upload pictures from events. I've never worked with user-uploaded images before. How do I store the image in MSFT SQL? Is there a best practice when it comes to users uploading huge 10 MB pictures? Is there a way to control or automatically compress pictures?
Thanks!
This is a two part question:
Part 1:
The first part is storing and retrieving the data. In your cfquery you will use cfqueryparam, which looks like this:
INSERT into EMPLOYEES (FirstName,LastName,Photo)
VALUES ('Aiden','Quinn',<cfqueryparam value="#ImageGetBlob(myImage)#" cfsqltype='cf_sql_blob'>)
To select then reconstruct you will use this:
<cfset myImage = ImageNew(#GetBLOBs.PHOTO#)>
You can then render it to the page with:
<cfimage action="writeToBrowser" source="#myImage#">
The above examples are pulled from the docs.
Get familiar with <cfimage> and the cfscript image functions for editing (rotating, scaling, etc.).
Part 2:
The other part to your question has to do with upload limits.
ColdFusion has limits that can be changed in the CFIDE administrator (or the Railo equivalent). There are also limits set in your web server, such as Apache or IIS; you will have to look into those to change them.
BUT if you are only concerned about 10 MB images you will be fine. It is files in the hundreds of MB that will cause you headaches.
Remember to set the enctype on your form as follows, because you will have to upload the file to your server before you can work with it:
<form action="workPage.cfm" method="post" enctype="multipart/form-data">
You will also have to access the uploaded file using <cffile action="upload">.
I think all this is enough to get you started.
If you are concerned about the storage size of an image, and this is of course a reasonable concern, then you could scale the image down to the maximum dimensions, and the quality or compression level (if stored as a JPEG), that your application needs.
Storing your images within a database allows them to be more easily portable, across a cluster for instance. If that is not a concern, then what I tend to do is generate a unique name for each image uploaded, rename them, and store the unique name in the database rather than the image itself.
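A quick sketch of that rename-and-store-the-name pattern (in TypeScript/Node for brevity rather than ColdFusion; the directory and the commented-out SQL are hypothetical):

import { randomUUID } from "crypto";
import { rename } from "fs/promises";
import { extname, join } from "path";

// Give the upload a collision-free name, move it into place, and keep
// only the generated name in the database, not the image bytes.
async function storeUpload(tempPath: string, originalName: string): Promise<string> {
  const uniqueName = randomUUID() + extname(originalName);
  await rename(tempPath, join("/var/www/images", uniqueName)); // hypothetical image root
  // INSERT INTO EMPLOYEES (FirstName, LastName, PhotoFile) VALUES (..., uniqueName)
  return uniqueName;
}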

Reduce image size of multiple images via ssh

I have an e-commerce website I made for a client.
As with any e-commerce site, there are a lot of pictures.
About a hundred of these pictures were uploaded by me, provided by my client.
The other 400 were uploaded by the client.
The problem is that the first set of images that my client provided me with were about 100kb each, which is not such a big deal. The second set of images, the ones my client uploaded, were about 5-9 MBs in size. Obviously I didn't see this until it was too late.
So my question is this: how can I reduce all those load-heavy images to something around 100-200 kB via ssh/command line/PHP?
I'm also talking about re-scaling the images to something smaller (currently they are about 3700px x 5600px).
Please note: I don't need a solution to re-scale the images when they are being uploaded.
I need a solution to re-scale the images that are already on the server.
Assuming your server runs Unix, you can use ImageMagick's convert tool:
http://doc.ubuntu-fr.org/imagemagick
You can also use PHP+GD, see:
http://fr.php.net/manual/en/book.image.php
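If it helps, a sketch of a batch resize driven from Node/TypeScript, since the images are already on the server (the directory, target size, and quality are guesses to adjust, and convert overwrites each file in place here, so back the originals up first):

import { execFileSync } from "child_process";
import { readdirSync } from "fs";
import { join } from "path";

const dir = "/var/www/shop/images"; // hypothetical upload directory

for (const name of readdirSync(dir)) {
  if (!/\.jpe?g$/i.test(name)) continue;
  const file = join(dir, name);
  // "1024x1024>" shrinks only images larger than 1024px on a side;
  // -quality 80 recompresses, which lands most photos near 100-200 kB
  execFileSync("convert", [file, "-resize", "1024x1024>", "-quality", "80", file]);
}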

Download large amount of images with objective-C

I'm currently developing an order entry application for my company. This means I need to download approximately 1900 product images to the iPad, and that's just the normal images. I also need to download an equal amount of thumbnails. The reason for downloading the images to the iPad instead of just displaying them from a given URL is that our reps wander into large stores which often don't have stable internet connections.
What would be the best course of action? The images are stored on our servers, but you need to be authenticated using Basic Auth before you can access them. I have thought of downloading them one by one, which is tedious, or grouping them together on the server as a zip file, but that would be a large file.
One-by-one is a valid option for the download. I have done projects with similar specs, so here is what I advise:
Use a 3rd-party library to help you with downloading the images, MKNetworkKit for example. If you feel comfortable enough, NSURLConnection is more than enough.
Store the images in the application sandbox.
Instead of downloading the thumbs, create them on the fly when you need them (lazy pattern), unless your thumbs are somehow different from the originals (some special effect). A sketch of the download loop follows.
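To illustrate the one-by-one approach with Basic Auth, a minimal sketch (TypeScript/Node for brevity; the same logic maps directly onto NSURLConnection, and the credentials and URLs are placeholders):

import { writeFile } from "fs/promises";
import { join } from "path";

// Node 18+ provides a global fetch
const auth = "Basic " + Buffer.from("user:password").toString("base64"); // placeholder credentials

async function downloadAll(urls: string[], destDir: string): Promise<void> {
  for (const url of urls) {
    const res = await fetch(url, { headers: { Authorization: auth } });
    if (!res.ok) continue; // or queue for retry
    const name = url.split("/").pop() ?? "image.jpg";
    await writeFile(join(destDir, name), Buffer.from(await res.arrayBuffer()));
  }
}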

Processing large video feed on the iPad

I need to take UIImages that are being fed in a video stream, all of this on the iPad with limited memory, save them to the file system quickly while the stream is still feeding, then process them after a "recording" session. I need to save the incoming UIImages quickly to avoid interrupting the feed, which will still be displayed on the iPad. I'm thinking of saving each frame to a separate file, then afterward reading those files sequentially and combining them into a .mov file.
The tricky parts are: how to save the UIImages quickly (maybe as raw data), and then, when producing the movie, how to append each UIImage file to it to make a seamless movie file. I will need to do some processing of each frame, like scaling and transforms, before appending.
Any advice would be greatly appreciated.
Depending on how big your images are, you could let Core Data's new "allows external storage" attribute option do this for you.
Here is the explanation of what it does, copied from another answer of mine:
Since we are on iOS 5 now, you no longer necessarily need to write images to disk yourself.
You can now set "allows external storage" on a Core Data binary attribute. According to Apple's release notes, it means the following:
Small data values like image thumbnails may be efficiently stored in a database, but large photos or other media are best handled directly by the file system. You can now specify that the value of a managed object attribute may be stored as an external record - see setAllowsExternalBinaryDataStorage: When enabled, Core Data heuristically decides on a per-value basis if it should save the data directly in the database or store a URI to a separate file which it manages for you. You cannot query based on the contents of a binary data property if you use this option.
There are several advantages to using this approach.
First, Core Data saves the files at least as fast as you could by writing to the file system yourself. And for any small images that meet the conditions described above, it will be much faster, because they are stored directly in the Core Data SQLite file.
Furthermore, with iOS 5 it is easy to work with separate managed object contexts and perform changes on a child context in the background. If that finishes successfully, you can merge the child context into your main managed object context and do the processing you need.
[child performBlock:^{
    NSError *error = nil;
    [child save:&error]; // runs in the background on the child context
}];
There is an NSPrivateQueueConcurrencyType for creating such a "child MOC" - see the [apple documentation][1].
And lastly, you can work with Core Data objects, which enables you to cache, limit, and optimize further processing after your download has completed.
[1]: https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/CoreData/Articles/cdConcurrency.html#//apple_ref/doc/uid/TP40003385

iPad - how should I distribute offline web content for use by a UIWebView in application?

I'm building an application that needs to download web content for offline viewing on an iPad. At present I'm loading some web content from the web for test purposes and displaying this with a UIWebView. Implementing that was simple enough. Now I need to make some modifications to support offline content. Eventually that offline content would be downloaded in user selectable bundles.
As I see it I have a number of options but I may have missed some:
Pack content in a ZIP (or other archive) file and unpack the content when it is downloaded to the iPad.
Put the content in a SQLite database. This seems to require some 3rd party libs like FMDB.
Use Core Data. From what I understand this supports a number of storage formats including SQLite.
Use the filesystem and download each required file individually. OK, not really a bundle but maybe this is the best option?
Considerations/Questions:
What are the storage limitations and performance limitations for each of these methods? And is there an overall storage limit per iPad app?
If I'm going to have the user navigate through the downloaded content, what option is easier to code up?
It would seem like spinning up a local web server would be one of the most efficient ways to handle the runtime aspects of displaying the content. Are there any open source examples of this which load from a bundle like options 1-3?
The other side of this is the content creation and it seems like zipping up the content (option 1) is the simplest from this angle. The other options would appear to require creation of tools to support the content creator.
If you have control over the content, I'd recommend a mix of the first and the third option. If the content is created by you (levels, etc.), then simply store it on the server, download it as a zip, and store it locally. Use Core Data to hold an index of the things you've downloaded, such as the path of the folder each bundle is stored in and its name/origin/etc., but not the raw data. Databases are not meant to hold massive amounts of raw content; they are meant to hold structured data. And even if they can, I'd not do so.
For your considerations:
Disk space is the only limit I know of on the iPad. However, databases tend to get slower as they grow too large. If you only ever scan through the data rather than querying it, use the file system directly; it may prove faster and cheaper.
The index in Core Data can store all the relevant metadata, giving you very easy and very quick access. Opening a piece of content then loads it from the file system, which is quick, cheap, and doesn't strain the index.
Why would you do that? Redirecting your UIWebView to a file:// URL will have the same effect, won't it?
Should be answered by now.
If you don't have control over the content, then use the same approach as above but download each file separately, as suggested in option four. After unzipping, both cases are basically the same.
Please get back if you have questions.
You could create an XML file for each bundle, containing the path to each file in the bundle, and place it in a folder common to all bundles. When downloading, download and parse the XML first, then download each resource one by one. This spares you the overhead of zipping and unzipping the content. Create a folder for each bundle locally and recreate the bundle's folder structure there. This way the content will work both online and offline without changes.
With a little effort, you could even track file versions by including a version number in the XML file for each resource, so if your content has been partially updated, only the files with changed version numbers have to be downloaded again (see the sketch below).
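A minimal sketch of the client side of that version check (TypeScript for brevity; the manifest shape is made up to match the description):

// Assumed manifest shape, parsed from the bundle's XML:
// <bundle><file path="img/logo.png" version="3"/>...</bundle>
interface ManifestEntry { path: string; version: number; }

// Compare the remote manifest against the versions stored locally and
// return only the entries that actually need to be (re-)downloaded.
function filesToDownload(remote: ManifestEntry[], local: Map<string, number>): ManifestEntry[] {
  return remote.filter((e) => (local.get(e.path) ?? -1) < e.version);
}

const local = new Map([["img/logo.png", 2]]);
console.log(filesToDownload([{ path: "img/logo.png", version: 3 }], local));
// -> [{ path: "img/logo.png", version: 3 }] : only the changed file is fetched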