Not able to create a Frame using an ImageView's bitmap? - BitmapFactory

I am implementing the Google TextRecognizer, which only detects Frames. I am trying to build the Frame using the bitmap of the image in my ImageView, but it doesn't work. If I build the Frame using a bitmap decoded from an image in the drawable folder, it does work. How can I convert the ImageView's bitmap into a format the Frame builder accepts, so that the TextRecognizer can detect text in it?
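A minimal sketch of the usual fix, assuming the Mobile Vision API (com.google.android.gms.vision): Frame.Builder().setBitmap() is happiest with a plain software ARGB_8888 bitmap, so copy the ImageView's drawable into that config before building the Frame. The context and imageView names are illustrative, and this assumes the drawable is bitmap-backed (e.g. set via setImageBitmap or a decoded resource).

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.drawable.BitmapDrawable;
import android.util.SparseArray;
import android.widget.ImageView;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.text.TextBlock;
import com.google.android.gms.vision.text.TextRecognizer;

// Detects text in whatever the ImageView is currently showing.
static SparseArray<TextBlock> detectText(Context context, ImageView imageView) {
    Bitmap source = ((BitmapDrawable) imageView.getDrawable()).getBitmap();
    // Frame.Builder can choke on unusual configs; a mutable ARGB_8888
    // software copy avoids hardware/recycled-bitmap surprises.
    Bitmap bitmap = source.copy(Bitmap.Config.ARGB_8888, true);

    TextRecognizer recognizer = new TextRecognizer.Builder(context).build();
    try {
        if (!recognizer.isOperational()) {
            // Detector dependencies not yet downloaded; nothing to return.
            return new SparseArray<>();
        }
        Frame frame = new Frame.Builder().setBitmap(bitmap).build();
        return recognizer.detect(frame);
    } finally {
        recognizer.release();
    }
}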

Related

Drawing Retina resolution on CGDisplayGetDrawingContext

I have a situation where I'm trying to draw an image into a display's CGContext retrieved using CGDisplayGetDrawingContext. Despite having the image at the correct high resolution, using CGContextDrawImage to draw the image onto the context results in a pixelated image. I've also tried scaling the image down in a bitmap context (created with CGBitmapContextCreate) and then drawing that onto the display's context, but that also results in a pixelated image (I thought it might retain the DPI; it was a long shot). Any idea how to fix this?

Is there an Android equivalent of iOS' UIViewContentModeScaleAspectFit?

I've got an image loaded from Parse.com, and on xhdpi and xxhdpi devices the displayed image is very tiny. I've tried playing with the XML layout. The most I can do is stretch a background image, which serves as the border for the downloaded image, to the full width of the screen. The height of the image never exceeds the physical size of the image stored on Parse.
I'm trying to get the image to scale to fit the width of whatever device it's on, while maintaining the aspect ratio.
I'm using a ParseImageView, which is a subclass of ImageView. I accomplished this on iOS by resizing the image to fill the width and then setting ImageView.contentMode=UIViewContentModeScaleAspectFit; Is there an equivalent of this for Android?
You can achieve this with the Picasso library. Using Picasso, you can load the image from the web as a Bitmap, read its width and height, and scale the ImageView to that aspect ratio. A related post is below:
getting image width and height with picasso library
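A sketch of that approach, assuming the Picasso.with()-era API (Picasso 2.5 and earlier) and its Target callback; context, imageView, and imageUrl are placeholder names:

import android.graphics.Bitmap;
import android.graphics.drawable.Drawable;
import com.squareup.picasso.Picasso;
import com.squareup.picasso.Target;

// Keep the Target as a field: Picasso holds targets weakly, and a local
// variable could be garbage-collected before the load finishes.
private final Target target = new Target() {
    @Override
    public void onBitmapLoaded(Bitmap bitmap, Picasso.LoadedFrom from) {
        // Fill the screen width, then derive the height from the
        // bitmap's own aspect ratio.
        int width = imageView.getResources().getDisplayMetrics().widthPixels;
        int height = width * bitmap.getHeight() / bitmap.getWidth();
        imageView.getLayoutParams().width = width;
        imageView.getLayoutParams().height = height;
        imageView.setImageBitmap(bitmap);
        imageView.requestLayout();
    }

    @Override
    public void onBitmapFailed(Drawable errorDrawable) { }

    @Override
    public void onPrepareLoad(Drawable placeHolderDrawable) { }
};

// Kick off the load; imageUrl would be the Parse file's download URL.
Picasso.with(context).load(imageUrl).into(target);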

Increase distance between pixels in the image

I need to enlarge an image but without resizing its pixels.
So if the original image is just a black square, then the new image should look something like this:
How could I do that?
You could look into Core Image's "Dot Screen" filter, or the other filters in Core Image's "halftone" category.
If you can't find something there, you might have to build it yourself by manipulating the bitmap contents directly, as sketched below.
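The build-it-yourself route is simple pixel bookkeeping. A platform-neutral sketch (written in Java with BufferedImage purely for illustration; the same loop translates directly to raw bitmap buffers on iOS): copy each source pixel onto a spaced-out grid and leave the gaps empty.

import java.awt.image.BufferedImage;

// Spreads the pixels of src apart by `gap` empty pixels in each direction,
// without scaling the pixels themselves. The gaps stay transparent; fill
// the output with a background color first if you want black gaps.
static BufferedImage spreadPixels(BufferedImage src, int gap) {
    int step = gap + 1;
    BufferedImage out = new BufferedImage(
            src.getWidth() * step, src.getHeight() * step,
            BufferedImage.TYPE_INT_ARGB);
    for (int y = 0; y < src.getHeight(); y++) {
        for (int x = 0; x < src.getWidth(); x++) {
            // Each source pixel keeps its 1x1 size but lands `gap`
            // pixels away from its former neighbours.
            out.setRGB(x * step, y * step, src.getRGB(x, y));
        }
    }
    return out;
}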

Titanium Images Wrong Size

I have a Titanium app where I am using the same ImageView for a lot of different images. I change the image in the ImageView by setting its image property.
The problem is that some of the images do not always show at their full size; it seems random: sometimes they show at full size and sometimes they don't.
I have width and height set to "auto" on the ImageView.
Has anyone come across this issue?
A potentially more reliable way would be to dynamically resize the image before handing it off to the ImageView, using Titanium.Blob.imageAsResized.
Try to integrate this into your code; note that you first have to load the image as a blob.
// Load the image as a blob (the file path is a placeholder), then resize it
var imageBlob = Ti.Filesystem.getFile('myImage.png').read();
imageView.image = imageBlob.imageAsResized(newWidth, newHeight);

iOS objective C converting coordinates from Absolute image location to my views coordinate system

I am using CIDetector to find faces in a picture.
The coordinates of faces it returns are the absolute coordinates in the image file (The image dimensions are much larger than the screen size obviously).
I tried to use the convertRect:toView: method, but the image itself is not a UIView, so that doesn't work; also, I have a few views embedded inside each other, with the image finally being shown in the innermost one.
I want to convert the bounds of the faces found in the image to the exact location of the face as shown on the screen in the embedded image view.
How can this be accomplished?
Thanks!
(The image being shown on the phone is scaled to fit the screen with aspect fit.)
The coordinates from CIDetector (Core Image) are flipped relative to UIKit coordinates. There are a bunch of tutorials out there on iOS face detection, but most of them are either incomplete or mess up the coordinates. Here's one that is correct: http://nacho4d-nacho4d.blogspot.com/2012/03/coreimage-and-uikit-coordinates.html
One thing to note: the tutorial uses a small image, so the resulting coordinates do not have to be scaled to the on-screen (UIImageView) representation of the image. Assuming you use a photo taken with the iPad camera, you will have to scale the coordinates by the amount the source image is scaled down (unless you reduce its size before running the face detection routine, which may not be a bad idea). You may also need to rotate the image to the correct orientation.
One of the answers here has a routine for rotating/scaling: UIImagePickerController camera preview is portrait in landscape app
And this answer has a good routine for finding the scale of an image presented by a UIImageView using 'aspect fit': How to get the size of a scaled UIImage in UIImageView?
You will need to use that scale to map the CIDetector coordinates from the full-size image to the scaled-down image shown in the UIImageView; the combined arithmetic is sketched below.
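A platform-neutral sketch of that mapping, written in Java purely for illustration (on iOS the same math applies to CGRect values): flip the Y axis from Core Image's bottom-left origin, then apply the aspect-fit scale and the centering offsets. All names are illustrative.

// Maps a CIDetector face rect (bottom-left origin, full-image pixels) into
// the coordinate space of a view showing the image with "aspect fit".
// Returns {x, y, width, height} in view coordinates.
static float[] mapFaceRect(float fx, float fy, float fw, float fh,
                           float imageW, float imageH,
                           float viewW, float viewH) {
    // Aspect-fit scale, plus the letterbox offsets that center the image.
    float scale = Math.min(viewW / imageW, viewH / imageH);
    float offsetX = (viewW - imageW * scale) / 2f;
    float offsetY = (viewH - imageH * scale) / 2f;

    // Core Image's origin is bottom-left; UIKit's is top-left, so flip Y.
    float topLeftY = imageH - fy - fh;

    return new float[] {
        offsetX + fx * scale,        // x in view coordinates
        offsetY + topLeftY * scale,  // y in view coordinates
        fw * scale,                  // width in view coordinates
        fh * scale                   // height in view coordinates
    };
}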