Reading and writing images to an SQLite DB for iPhone use - objective-c

I've set up a SQLite DB that currently reads and writes NSStrings perfectly. I also want to store an image in the database and recall it later. I've read up a bit on using NSData and encoding the image, but I'm not entirely sure what the syntax is for what I want to do. Any code snippets or examples would be greatly appreciated.
My current process goes like this:
UIImagePickerController -> user chooses an image from Photos -> chosenImage is set on an instance of UIImageView -> now I want to take this image and store it in the DB
I should mention this call will eventually be replaced with a call to a remote server. Not sure if this makes a difference as far as performance goes.
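For reference, the picker delegate that hands me the chosen image looks roughly like this (the property names here are just mine):
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *chosenImage = [info objectForKey:UIImagePickerControllerOriginalImage];
    self.chosenImageView.image = chosenImage; // chosenImageView is my UIImageView outlet
    [picker dismissModalViewControllerAnimated:YES];
    // ...this is the point where I want to store chosenImage in the DB
}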

You'll need to convert the UIImage hosted within your UIImageView into a binary BLOB for storage in SQLite. To do that, you can use the following:
NSData *dataForImage = UIImagePNGRepresentation(cachedImage);
sqlite3_bind_blob(yourSavingSQLStatement, 2, [dataForImage bytes], [dataForImage length], SQLITE_TRANSIENT);
This will generate a PNG representation of your image, store it in an NSData instance, and then bind the bytes of that NSData as a BLOB for the second argument in your SQL query. Use UIImageJPEGRepresentation instead if you prefer that format. You will need to add a BLOB column to the appropriate table in your SQLite database.
To retrieve this image, you can use the following:
NSData *dataForCachedImage = [[NSData alloc] initWithBytes:sqlite3_column_blob(yourLoadingSQLStatement, 2) length: sqlite3_column_bytes(yourLoadingSQLStatement, 2)];
self.cachedImage = [UIImage imageWithData:dataForCachedImage];
[dataForCachedImage release];
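If it helps to see where these calls sit, here is a minimal sketch of the surrounding save statement, assuming an open database handle and a hypothetical photos table whose second column is the BLOB:
sqlite3_stmt *yourSavingSQLStatement = NULL;
const char *sql = "INSERT INTO photos (name, image) VALUES (?, ?)"; // hypothetical table and columns
if (sqlite3_prepare_v2(database, sql, -1, &yourSavingSQLStatement, NULL) == SQLITE_OK) {
    sqlite3_bind_text(yourSavingSQLStatement, 1, [name UTF8String], -1, SQLITE_TRANSIENT); // name is a hypothetical NSString
    sqlite3_bind_blob(yourSavingSQLStatement, 2, [dataForImage bytes], (int)[dataForImage length], SQLITE_TRANSIENT);
    sqlite3_step(yourSavingSQLStatement);
}
sqlite3_finalize(yourSavingSQLStatement);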

One option (and generally preferred when working in SQL) is to write the image to a file on the system and store the path (or some other kind of identifier) in the database.
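A rough sketch of that approach, assuming the chosen UIImage and an open prepared statement whose second parameter is the TEXT path column (the file naming scheme here is arbitrary):
NSData *imageData = UIImagePNGRepresentation(chosenImage);
NSString *documentsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *fileName = [NSString stringWithFormat:@"%@.png", [[NSProcessInfo processInfo] globallyUniqueString]];
[imageData writeToFile:[documentsDir stringByAppendingPathComponent:fileName] atomically:YES];
// Store only the file name; the Documents path can change between app updates.
sqlite3_bind_text(yourSavingSQLStatement, 2, [fileName UTF8String], -1, SQLITE_TRANSIENT);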

Apple's recommendation is not to store BLOBs bigger than ~2 kilobytes in a SQLite database.
SQLite organizes databases into pages. Each page is 4 kilobytes in size. When you read data from the SQLite database file it loads these pages into an internal page cache. On the iPhone I think this cache defaults to 1 megabyte in size. This makes reading adjacent records very fast because they will probably be in the page cache already.
When SQLite reads your database record into memory it reads the entire record and all of the pages that it occupies. So if your record contains a BLOB, it could occupy many pages and you will be ejecting existing pages from the cache and replacing them with your BLOB record's pages.
This isn't so bad if you're just scanning through and loading all your BLOBs to do something with them (display them, for example). But if, say, you ran a query that only needed some other data in the same row as the BLOB, that query would be much slower than if the record did not contain the large BLOB.
So at a minimum you should store your BLOB data in a separate table. Eg:
CREATE TABLE blobs ( id INTEGER PRIMARY KEY, data BLOB );
CREATE TABLE photos ( id INTEGER PRIMARY KEY, name TEXT, blob_id INTEGER,
FOREIGN KEY(blob_id) REFERENCES blobs(id) );
Or better yet, store the BLOB data as files outside of the SQLite database.
Note that it may be possible to tweak the page cache size with SQL PRAGMA statements (if you're not using CoreData).
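For example, something along these lines right after opening the connection; the values are purely illustrative, so measure before committing to them:
sqlite3_exec(database, "PRAGMA page_size = 4096;", NULL, NULL, NULL);  // only takes effect before the database file is populated
sqlite3_exec(database, "PRAGMA cache_size = 500;", NULL, NULL, NULL);  // number of pages held in the page cache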

Writing Image to SQLite DB
if (myImage != nil) {
    NSData *imgData = UIImagePNGRepresentation(myImage);
    // SQLITE_TRANSIENT tells SQLite to copy the bytes, so imgData may be released before the statement runs
    sqlite3_bind_blob(update_statement, 6, [imgData bytes], (int)[imgData length], SQLITE_TRANSIENT);
}
else {
    sqlite3_bind_null(update_statement, 6);
}
Reading From SQLite DB:
NSData *data = [[NSData alloc] initWithBytes:sqlite3_column_blob(init_statement, 6) length:sqlite3_column_bytes(init_statement, 6)];
if(data == nil)
NSLog(#"No image found.");
else
self.pictureImage = [UIImage imageWithData:data];

You should first write the image to the file system, then store that image's path (URL) as TEXT in SQLite.

The best practice is to compress the image before storing it in the database or file system. If you don't care about the resolution of the image, you can even resize it first by using:
UIGraphicsBeginImageContext(newSize);
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
After that you can use UIImageJPEGRepresentation for JPEG images with a compression quality of 0.0 for maximum compression, or UIImagePNGRepresentation for PNG images.
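For instance (0.0 is the lowest quality / highest compression, 1.0 the highest quality):
NSData *jpegData = UIImageJPEGRepresentation(newImage, 0.0);
// or, if you need lossless output:
NSData *pngData = UIImagePNGRepresentation(newImage);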

Related

Does NSMutableDictionary now truncate data as a string or add ellipses? It seems to, here

Does NSMutableDictionary now truncate data as a string, or return ellipses for long data? I use this feature to save a plist with different colors in it. This has worked fine (with minor modifications) since around 2005.
But last month, I think after an OS update, I noticed all of my data was starting to get corrupted. I've narrowed it down to this. When I run this code...
NSMutableDictionary *dict = [NSMutableDictionary dictionary];
NSError *error = nil;
[dict setObject:[NSKeyedArchiver archivedDataWithRootObject:[NSColor redColor] requiringSecureCoding:NO error:&error] forKey:@"backdropColor"];
NSString *test = [dict description];
Note that before macOS 10.13, you could use this code instead, which shows the same bug.
[dict setObject:[NSArchiver archivedDataWithRootObject:[NSColor redColor]] forKey:@"backdropColor"];
When I run either, I get the following result:
backdropColor = {length = 3576, bytes = 0x62706c69 73743030 d4010203 04050607 ... 00000000 00000d88 };
See the ... ? That's not supposed to be there. It used to fill in that ... with all of the data.
I can't find any documentation that explains a change, and while this code has remained unchanged for years, it's now corrupted months of work for one of my users already.
Turning some of our comments into an answer:
-[NSObject description] is not meant to be a general-purpose parsing/serialization format, and over time, the descriptions of objects may change. In macOS Catalina, the description for NSData changed to truncate contents in the middle to avoid full display of enormous data blobs.
Currently, I throw a ton of objects into NSData and then export that, via its description, to a plist, which can then be easily parsed back into an NSData object later. That's all it needs to do: correctly dump an NSData object out and read it back in.
Based on your minimal requirements, the simplest resolution for your problem is simply storing NSData objects directly in your plist, instead of their -descriptions. The plist format natively supports binary data, and all Foundation tools (like NSPropertyListSerialization) will accept NSData instances directly for writing out to disk.
If you would like to explicitly convert your binary data into a safely round-trippable string, consider converting it to a base64-encoded string using -[NSData base64EncodedStringWithOptions:], storing the string in the plist, and retrieving later with -[NSData initWithBase64EncodedString:options:].
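For illustration only, a sketch of that round trip using your redColor example (not a drop-in replacement for your existing file format):
NSData *colorData = [NSKeyedArchiver archivedDataWithRootObject:[NSColor redColor] requiringSecureCoding:NO error:&error];
[dict setObject:[colorData base64EncodedStringWithOptions:0] forKey:@"backdropColor"];
// ...and when reading the plist back:
NSData *decoded = [[NSData alloc] initWithBase64EncodedString:[dict objectForKey:@"backdropColor"] options:0];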
However, if you require backwards compatibility with the old format (e.g. versions of your app running on macOS Catalina and newer must be able to save files readable on older versions of macOS and your app), you will need to write your own method for replicating the format.

Why is TIFFRepresentation bigger than the actual image file?

Here's the code I use:
NSImage *image = [[NSBundle bundleForClass:[self class]] imageForResource:@"test.jpg"];
NSData *originalData = [image TIFFRepresentationUsingCompression:NSTIFFCompressionJPEG factor:1.];
originalData.length gives me 1802224 bytes (1.7 MB), while the image size on disk is 457 KB. Why does the TIFFRepresentation come out larger, and which representation should I use in order to get the original image (I want to transfer the image over the network)?
TIFF is much bigger than JPEG. No surprise there.
However, the answer to your actual question is: don't use any "representation". You have the data (namely, the file itself); send it! Don't turn the image file into an NSImage; grab it as an NSData.
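For example, something like this, reusing the bundle lookup from your snippet:
NSURL *url = [[NSBundle bundleForClass:[self class]] URLForResource:@"test" withExtension:@"jpg"];
NSData *originalData = [NSData dataWithContentsOfURL:url];
// originalData is now the 457 KB JPEG exactly as it sits on disk; send those bytes.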

Suggestions to Process Complex + Large Data File

I have a very large and complex data file (.txt, see snippet below) of about 10MB and would like to know the best way to store it and access it later on.
My app currently uses core data for storage of other entities but I don't see how I can create an entity from this type of data file because of its complexity.
This file is divided as follows:
The first line of each major section begins with A| and marks a new 'airway' being defined. Then comes its name, so in the example below we have an airway named V320 and another named V321. The following lines carry the important data: the 'points'/waypoints that make up this airway. Each one has a name and coordinates. So the first one here is PLN at 45.63N and -84.66W (coordinates). Then, from there, the next one is LORIW at 45.35N and -84.92W, from LORIW we go to IROTO, and so on...
NOTE: There may be two, three, maybe even four airways with the same 'name'; V320, for example, has 3, but each one is in its own part of the map.
The other values there are irrelevant such as the numbers after the coordinate pair.
In essence, I need all this so that I can draw lines on my map (GMSPolyline using the Google Maps SDK) that go through all these points for each airway, and then create GMSMarkers (Google's version of MKAnnotation) for each waypoint, which the user can tap.
I can handle the drawing of lines/markers on the map but the difficult part for me to visualize is the manipulation of this data and making it easier to access.
Let me know if you have any questions.
A|V320|20
S|PLN|045630647|-0084664108|LORIW|045352072|-0084924214|0|219|1998
S|LORIW|045352072|-0084924214|IROTO|045188989|-0085075111|219|219|1168
S|IROTO|045188989|-0085075111|ADENO|045030644|-0085220425|219|219|1132
S|ADENO|045030644|-0085220425|TIDDU|044877978|-0085359767|215|215|1090
S|TIDDU|044877978|-0085359767|SKIPR|044831714|-0085401772|215|215|330
.....
A|V321|29
S|PZD|031655206|-0084293100|KUTVE|031866950|-0084451303|0|329|1505
S|KUTVE|031866950|-0084451303|DUVAT|031948772|-0084512695|329|329|582
S|DUVAT|031948772|-0084512695|LUMPP|032041158|-0084582139|329|329|657
S|LUMPP|032041158|-0084582139|PREST|032176375|-0084684117|329|329|963
S|PREST|032176375|-0084684117|CSG|032615253|-0085017631|326|326|3129
S|CSG|032615253|-0085017631|JALVO|032722436|-0085064033|326|339|684
.....
Your data exhibits some regularity. If it is predictable and consistent, just write a parser that iterates through the file and creates appropriate Core Data entities.
For example, the fact that each new airway is separated by a newline can help you find those. Also, each final waypoint is repeated in the next line unless you are at the end of an airway record. I think you can do this in maybe 20-30 lines of code.
On your development machine (or even on an iPad or recent iPhone, for that matter), even creating a 10MB array in memory (to be parsed) should not be a constraint.
If the data is static, you can use the resulting sqlite database as a read-only persistent store that you can include in your app bundle.
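If you go that route, a rough sketch of attaching the bundled store read-only (the store file name and the persistentStoreCoordinator variable are assumptions):
NSURL *storeURL = [[NSBundle mainBundle] URLForResource:@"Airways" withExtension:@"sqlite"];
NSError *error = nil;
[persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                                         configuration:nil
                                                   URL:storeURL
                                               options:@{NSReadOnlyPersistentStoreOption: @YES}
                                                 error:&error];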
As for the parser, it would be something like this:
NSString *file = [[NSString alloc] initWithContentsOfFile:fileURLString
                                                 encoding:NSUTF8StringEncoding error:nil];
NSArray *lines = [file componentsSeparatedByString:@"\n"];
for (NSString *line in lines) {
    if (line.length < 1) { continue; }
    NSArray *fields = [line componentsSeparatedByString:@"|"];
    if ([fields.firstObject isEqualToString:@"A"]) {
        // insert new airway object and populate with other fields
    }
    else if ([fields.firstObject isEqualToString:@"S"]) {
        // insert new waypoint object (two for each first line)
        // assign as relationship to the current airway
        // and to another waypoint as necessary
    }
}
[managedObjectContext save:nil];

Create thumbnail for large .psd and .tiff files in cocoa (objective-c)

I am developing an OS X application dealing with very large images (500 MB to over 1.0 GB). My application needs to load the images (.psd & .tif) and allow the user to sort the images, rate them, and so on.
I would like to load a small thumbnail of the image. So here is what I am struggling with:
I have tried to generate the thumbnail in three different ways, and the fastest has been about 17 seconds per thumbnail.
I am looking for recommendations on how to decrease the thumbnail generation time. Do you guys know a library that I could use? Maybe another way to speed this up.
Attempts:
I used CGImage's thumbnail generation method CGImageSourceCreateThumbnailAtIndex to generate the image.
I embedded AppleScript in my Cocoa app and used the following command: do shell script ("/usr/bin/qlmanage -t -s640 " & quoted form of file_Path & space & " -o " & quoted form of save_path)
I grabbed the thumbnail via Quick Look using QLThumbnailImageCreate
both 1 & 2 been around 17 second generation time for each image. 3 returns the blank image. I think it has to do with the fact that preview needs to load it.
I also tried using GCD (Grand Central dispatch) to speed things up too, however it seems due to the disk read bottleneck, the processes are always in serial and do not get executed in paralel. So multi-threading using different queues didn't help (used dispatch_async).
Its worth mentioning that all these images exist on an external hard drive which my application will be reading. The idea is to do this without needing to move the files.
Again i am using Objective-C and developing this of OSX 10.8. I am hoping maybe there is a C++ / C library or something faster than the three options I found my self.
Any help is greatly appreciate it.
Thank you.
If there is a preview/thumbnail embedded in the file, you can try to create the CGImageSource with a mapped file (so only the bytes that are really necessary for generating the thumbnail will be read from the disk):
NSURL* inURL = ... // The URL for your PSD or TIFF file
// Create an NSData object that maps the file, so only the bytes that are actually needed get read:
NSDataReadingOptions dataReadingOptions = NSDataReadingMappedIfSafe;
NSData* data = [[NSData alloc] initWithContentsOfURL:inURL options:dataReadingOptions error:nil];
// Create a CGImageSourceRef that does not cache the decompressed result
// (typeName is the UTI of the file, e.g. @"com.adobe.photoshop-image" or @"public.tiff"):
NSDictionary* sourceOptions =
    @{(id)kCGImageSourceShouldCache: (id)kCFBooleanFalse,
      (id)kCGImageSourceTypeIdentifierHint: (id)typeName
    };
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)data, (CFDictionaryRef)sourceOptions);
// Create a thumbnail without caching the decompressed result
// (kIMBMaxThumbnailSize is a pixel-size constant defined elsewhere, e.g. 640):
NSDictionary* thumbOptions = @{(id)kCGImageSourceShouldCache: (id)kCFBooleanFalse,
                               (id)kCGImageSourceCreateThumbnailWithTransform: (id)kCFBooleanTrue,
                               (id)kCGImageSourceCreateThumbnailFromImageIfAbsent: (id)kCFBooleanTrue,
                               (id)kCGImageSourceCreateThumbnailFromImageAlways: (id)kCFBooleanFalse,
                               (id)kCGImageSourceThumbnailMaxPixelSize: [NSNumber numberWithInteger:kIMBMaxThumbnailSize]};
CGImageRef result = CGImageSourceCreateThumbnailAtIndex(source, 0, (CFDictionaryRef)thumbOptions);
// Clean up
CFRelease(source);
[data release];
// Return the result (autoreleased):
return [NSMakeCollectable(result) autorelease];

How to serialize NSImage to an sql script in order to import it in a blob column?

My iPhone app uses a SQLite database to store data. I have a table with a blob column where I store my images.
When I do an update I don't want to overwrite the user's database; I want to execute some SQL scripts and inject new data if needed.
I have a utility app made for Mac that should produce the SQL scripts that will be run on the iPhone.
I have the images stored as NSImages in my app, but I have problems when I want to export the data as SQL scripts (simple text files).
My files should have lines like:
Insert into Images(imageData) values ( ___IMAGE1_DATA___ );
Insert into Images(imageData) values ( ___IMAGE2_DATA___ );
Insert into Images(imageData) values ( ___IMAGE3_DATA___ );
The question is: how could I serialize the image data to my SQL script so that it imports correctly into the blob column?
You can use TIFFRepresentation of the NSImage to get hold of the NSData representation.
Actually I would save the images to disk and reference those only from your sqlite database. This can improve performance when your images tend to be large.
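A rough sketch of that variant (the directory, file naming, and path column are just examples):
NSData *tiffData = [image TIFFRepresentation];
NSString *fileName = [NSString stringWithFormat:@"image-%d.tiff", imageId]; // imageId is a hypothetical identifier
[tiffData writeToFile:[imagesDirectory stringByAppendingPathComponent:fileName] atomically:YES];
// and in the script: INSERT INTO Images(imagePath) VALUES ('image-10.tiff');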
You will need to get the underlying CGImage object, using the CGImage method of UIImage. Then take a look at the CGImage reference:
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Reference/CGImage/Reference/reference.html
Then use a CGContextRef object to draw the image, and obtain the pixel data.
There's a good example on StackOverflow:
How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?
You can also use the UIImagePNGRepresentation or UIImageJPEGRepresentation functions, which return an NSData object directly, if those formats suit you. It will be a lot simpler.
Then use the bytes method of NSData to get a pointer to the image data (const void *).
I found a solution to serialize the images to an sql script and then insert them into the db (blob column).
I extract the NSData from an image like this: NSData *thumbnailData = [thumbnail TIFFRepresentation]; (thanks Nick).
After I extract the NSData, I convert it into a hex string using the method below, which I added to a category on NSData.
- (NSString*) hexString {
    NSMutableString *stringBuffer = [NSMutableString stringWithCapacity:([self length] * 2)];
    const unsigned char *dataBuffer = [self bytes];
    for (NSUInteger i = 0; i < [self length]; ++i) {
        [stringBuffer appendFormat:@"%02x", dataBuffer[i]];
    }
    return [[stringBuffer copy] autorelease];
}
NSString *hexRepresentation = [thumbnailData hexString];
The hexRepresentation will look like below:
4d4d002a00005a48fafafafff8f8f8fff8f8f8fff9f9f9fff8f8f8fff8f8f8
…
In order to serialize the hexRepresentation of the image i created an SQL script like below:
INSERT INTO Thumbnails (Picture_uid, Thumbnail) Values(10, x'4d4d002a00005a48fafafafff8f8f8fff8f8f8fff9f9f9fff8f8f8fff8f8f8 … ‘) ;
The x'…' prefix tells the DB that it will receive the data in hex format, and it will know how to deal with it.
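Putting it together, the utility app builds each line of the script roughly like this (pictureUid and scriptString are names I am assuming here):
NSString *insertStatement = [NSString stringWithFormat:
    @"INSERT INTO Thumbnails (Picture_uid, Thumbnail) Values(%d, x'%@');\n", pictureUid, hexRepresentation];
[scriptString appendString:insertStatement]; // scriptString is the NSMutableString that gets written out as the .sql file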
One of the problems with this solution is that it doubles the size of the script: a 200 KB image makes the script about 400 KB larger, but in the DB the image will still be 200 KB.
For me this was a good solution for updating my DB using SQL scripts without writing any code.