I can print a nice, perfect report on iPad, but now we want to port our app to iPhone, too.
All normal views print as they should, except the one view where we use ShinobiCharts (OpenGL): on iPhone only the screen-sized portion is printed, and the rest of the PDF page stays white.
I tried putting the view into a scroll view and programmatically assigning the right resolution to it before printing, but this only resulted in the small window being stretched to fit the PDF page's size, still not displaying the whole diagrams.
Normal views:
// Render the view's layer straight into the PDF context.
UIGraphicsBeginPDFPage();
[pdf1.view.layer renderInContext:pdfContext];
Diagram view:
UIGraphicsBeginPDFPage();
// Snapshot the chart view; drawViewHierarchyInRect: captures the OpenGL
// content that renderInContext: misses.
UIGraphicsBeginImageContextWithOptions(pdf8.view.bounds.size, NO, 0.0);
[pdf8.view drawViewHierarchyInRect:pdf8.view.bounds afterScreenUpdates:YES];
UIImage *pdf8Image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Wrap the snapshot in an image view and render its layer into the PDF.
UIImageView *pdf8ImageView = [[UIImageView alloc] initWithImage:pdf8Image];
[pdf8ImageView.layer renderInContext:pdfContext];
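For reference, the intermediate UIImageView isn't strictly necessary. A minimal sketch that draws the snapshot straight into the PDF page; pageRect is a hypothetical CGRect for the target page, not a name from the original code:

UIGraphicsBeginPDFPage();
UIGraphicsBeginImageContextWithOptions(pdf8.view.bounds.size, NO, 0.0);
[pdf8.view drawViewHierarchyInRect:pdf8.view.bounds afterScreenUpdates:YES];
UIImage *pdf8Image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// drawInRect: targets the current context, which is the PDF context again
// now that the image context has been ended; pageRect is hypothetical.
[pdf8Image drawInRect:pageRect];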
Screenshots: full image on iPad; cropped image without the scroll view; scaled image with the scroll view.
Using face detection, I want to blur the eyes and mouth of a person. I have an imageView that contains three subviews (two for the eyes and one for the mouth). Each of these subviews is masked with a PNG shape (with a clear background) to avoid showing a rectangle.
On screen my imageView looks like this: http://screencast.com/t/ak4SkNXM0I
I want to obtain the image so I can store it somewhere else, so I've tried this:
// Render the imageView's layer into a bitmap context and grab the result.
CGSize size = [imageView bounds].size;
UIGraphicsBeginImageContext(size);
[[imageView layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But finalImage comes out like this:
http://screencast.com/t/eDlvGqqY
My subviews (eyes and mouth) are not masked as they are above.
Any idea?
Thanks.
Edit:
I have to use a library compatible with iOS 6.
You can check the new APIs added in iOS 7. Try one of the following methods (a usage sketch follows the list):
snapshotViewAfterScreenUpdates:
resizableSnapshotViewFromRect:afterScreenUpdates:withCapInsets: for a resizable image
drawViewHierarchyInRect:afterScreenUpdates:
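A minimal sketch of the last method (iOS 7+), reusing imageView from the question; drawViewHierarchyInRect: captures the view as it appears on screen, including layer masks:

// Snapshot the on-screen appearance rather than rendering the layer tree.
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 0.0);
[imageView drawViewHierarchyInRect:imageView.bounds afterScreenUpdates:YES];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();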
In the application I'm working on, you can take a picture with the iPad camera. After that, using Core Graphics, you can draw shapes on that image.
At first the image was upside down and mirrored. I resolved that with this:
// Flip the context vertically so the image is no longer upside down/mirrored.
CGContextTranslateCTM(myContext, 0, backgroundImage.size.height);
CGContextScaleCTM(myContext, 1.0, -1.0);
But now, when you take the image in portrait mode, the imported image is rotated to the left (so it's presented horizontally). I rotated the image back with this code:
UIImage *tempImage = [[UIImage alloc] initWithCGImage:imagetest.CGImage];
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, 0, tempImage.size.height);
transform = CGAffineTransformRotate(transform, -M_PI_2);
// Note: the context keeps the original width x height, while the drawing
// rect below is height x width; this mismatch is probably the problem.
CGContextRef ctx = CGBitmapContextCreate(NULL, tempImage.size.width, tempImage.size.height,
                                         CGImageGetBitsPerComponent(tempImage.CGImage), 0,
                                         CGImageGetColorSpace(tempImage.CGImage),
                                         CGImageGetBitmapInfo(tempImage.CGImage));
CGContextConcatCTM(ctx, transform);
CGContextDrawImage(ctx, CGRectMake(0, 0, tempImage.size.height, tempImage.size.width), tempImage.CGImage);
CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
UIImage *img = [UIImage imageWithCGImage:cgimg];
CGContextRelease(ctx);
CGImageRelease(cgimg);
Now the image is shown the right way (portrait), but I can't draw properly on it, maybe because the width and height are reversed.
From what I've read, the image carries an orientation flag in its metadata that Core Graphics ignores.
Do you know a better way to rotate the image? Or any solution that would keep the image from rotating when a photo is taken in portrait mode?
Yes, that is an issue, because the default orientation of the device camera is landscape. If you take a picture in portrait mode and view the preview in the Photo Gallery, it will look fine, but as soon as you use it in your app it will be rotated 90 degrees. To fix that issue I have written an answer in my recent post here.
If you tell the image to draw itself, it will respect its own orientation. There is no need to flip it (it does that itself) and no need to rotate it.
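A minimal sketch of that approach, reusing imagetest from the question; UIImage's drawInRect: honors the image's imageOrientation, so the bitmap that comes back is upright:

// Redraw the image so its orientation is baked into the pixels.
UIGraphicsBeginImageContextWithOptions(imagetest.size, NO, imagetest.scale);
[imagetest drawInRect:CGRectMake(0, 0, imagetest.size.width, imagetest.size.height)];
UIImage *uprightImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();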
I am creating a PDF by taking a screenshot of a UIView. This currently works great on the iPad 3 with the Retina display, but when testing on other devices with lower-resolution screens I am having problems with text resolution.
Here is my code:
//start a new page with default size and info
//this can be changed later to include extra info.
UIGraphicsBeginPDFPage();
//render the view's layer into an image context;
//the last option specifies the scale (0 would use the device's scale).
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 2.0);
CGContextRef context = UIGraphicsGetCurrentContext();
[view.layer renderInContext:context];
UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//render the screenshot into the pdf page CGContext
[screenShot drawInRect:view.bounds];
//close the pdf context (saves the pdf to the NSData object)
UIGraphicsEndPDFContext();
I have also tried setting the UIGraphicsBeginImageContextWithOptions scale to 2.0, but this makes no difference. How can I force a view on an iPad 2 to render at 2x resolution?
Expected output and actual output: (screenshots not included)
I ended up fixing this by recursively setting the contentScaleFactor property of the parent view and its subviews to 2.0.
The UIImage was rendering at the correct resolution, but the layer wasn't when renderInContext: was called.
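A sketch of the fix described above; the helper name is illustrative, not from the original answer:

// Recursively raise the content scale so renderInContext: rasterizes
// at 2x even on non-Retina devices.
static void SetContentScaleRecursively(UIView *aView, CGFloat scale) {
    aView.contentScaleFactor = scale;
    aView.layer.contentsScale = scale;
    for (UIView *subview in aView.subviews) {
        SetContentScaleRecursively(subview, scale);
    }
}

Calling SetContentScaleRecursively(view, 2.0) before rendering the layer into the image context would then match the behavior described above.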
I am new to Objective-C, and I want to let a screenshot made with
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
...
UIGraphicsBeginImageContext(imageSize);
...
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
"fly in" (rotating and stopping at a random angle) inside its own image container, looking like a small photo (shadow, white border). How could I realize that?
Since you have a vague question, and your code shows only how you are getting the screen image, here's a general answer (a sketch follows these steps):
Put the image on a CALayer, with the image as the layer's contents.
Add a border, a shadow, or whatever other chrome you need to your image layer (or to another layer underneath it).
Use Core Animation to animate this layer to whatever position or state you want it in.
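A minimal sketch of those steps, reusing image from the question; containerView is a hypothetical host view, and the size, duration, and angle range are illustrative values:

// Put the screenshot on a layer and dress it up like a small photo.
CALayer *photoLayer = [CALayer layer];
photoLayer.contents = (id)image.CGImage;
photoLayer.bounds = CGRectMake(0, 0, image.size.width / 4.0, image.size.height / 4.0);
photoLayer.position = containerView.center;   // containerView is hypothetical
photoLayer.borderColor = [UIColor whiteColor].CGColor;
photoLayer.borderWidth = 4.0;
photoLayer.shadowOpacity = 0.5;
photoLayer.shadowOffset = CGSizeMake(0, 3);
[containerView.layer addSublayer:photoLayer];

// Fly in: spin the layer and let it settle at a random angle.
CGFloat angle = (((CGFloat)arc4random_uniform(61)) - 30.0) * M_PI / 180.0;
photoLayer.transform = CATransform3DMakeRotation(angle, 0, 0, 1);   // final resting state
CABasicAnimation *spin = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
spin.fromValue = @(angle + 4.0 * M_PI);   // start several turns away
spin.toValue = @(angle);
spin.duration = 0.8;
[photoLayer addAnimation:spin forKey:@"flyIn"];

// Slide in from above at the same time.
CABasicAnimation *move = [CABasicAnimation animationWithKeyPath:@"position"];
move.fromValue = [NSValue valueWithCGPoint:CGPointMake(containerView.center.x, -200)];
move.duration = 0.8;
[photoLayer addAnimation:move forKey:@"moveIn"];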
You can accomplish this with OpenGL ES.
Here is a great project which provides a generic base OpenGL ES view-transition class:
https://github.com/epatel/EPGLTransitionView
There are several predefined transitions you can take as examples: https://github.com/epatel/EPGLTransitionView/tree/master/src
Demo3Transition actually shows how to rotate the screenshot:
https://github.com/epatel/EPGLTransitionView/blob/master/src/Demo3Transition.m
You can see the available transitions in action if you launch the DemoProject.
I am working on an iPhone app wherein I need to make a portion of an image transparent by setting its alpha level to 0 as the user moves a finger around on it. Basically, if you know the App Store application iSteam: the user should be able to rub a finger around on a top image to make it transparent, revealing the background image underneath.
Currently I am using two UIImageViews: one holds the background image, and the other sits on top of it holding a darker image. The user should be able to draw random curves on this darker image, making those parts of the background image show through. I can't figure out how to make the top image, held by the topmost of the two UIImageViews, transparent.
Any idea on this? Also, what should I use: Quartz or OpenGL? I am a newbie to iPhone app development and have absolutely no idea about these APIs, so some guidance from the experts would surely help me get ahead with iPhone SDK development.
The UIImageView has a layer, which you can refer to as its layer and talk to once you've linked your project against QuartzCore. As the user moves a finger, clip a clear shape in an opaque-color-filled graphics context the same size as the UIImageView, turn that into a CGImageRef, set it as a CALayer's contents (again, this CALayer needs to be the same size as the UIImageView), and set that layer as the UIImageView's layer.mask. Wherever the mask is clear, it punches a transparent hole in the layer, which means in the view, which means in the image the UIImageView is showing. (If that doesn't work, because the UIImageView doesn't like your interfering with its layer, you can use a superview of the UIImageView instead.)
EDIT (next day) - Here's sample code for a layer's delegate that punches a circular hole in the center:
-(void)drawLayer:(CALayer *)layer inContext:(CGContextRef)c {
    CGRect r = CGContextGetClipBoundingBox(c);
    // A 20x20 rect centered in the layer: the hole to punch.
    CGRect r2 = CGRectInset(r, r.size.width/2.0 - 10, r.size.height/2.0 - 10);
    UIImage* maskim;
    {
        // Build the mask image: opaque everywhere except a clear ellipse.
        UIGraphicsBeginImageContextWithOptions(r.size, NO, 0);
        CGContextRef mc = UIGraphicsGetCurrentContext();
        CGContextAddEllipseInRect(mc, r2);
        CGContextAddRect(mc, r);
        CGContextEOClip(mc); // even-odd clip: everything except the ellipse
        CGContextSetFillColorWithColor(mc, [UIColor blackColor].CGColor);
        CGContextFillRect(mc, r);
        maskim = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    // Use the image as a layer mask; the clear ellipse punches the hole.
    CALayer* mask = [CALayer layer];
    mask.frame = r;
    mask.contents = (id)maskim.CGImage;
    layer.mask = mask;
}
So, if that layer is a view's layer, and if the UIImageView is that view's subview, a hole is punched in the UIImageView.
Here's a screen shot of the result: