I'm trying to make a program where I need to show images and videos on an external screen. So I have a Table View where I can enter names and links to video files (mp4 for now) and image files (jpg for now).
I can't find a way to display still images in AVPlayer.
So to visualize the video files or the images, I created an AVPlayer and a UIImageView of the same size. The AVPlayer is placed above the UIImageView.
If I want to display an image, I hide the AVPlayer.
if ([self selectedVideoURL] != nil) {
    NSString *myString = [[self selectedVideoURL] absoluteString];
    NSString *extension = [myString substringFromIndex:[myString length] - 3];
    if (![extension isEqualToString:@"jpg"]) {
        self.playerView.hidden = NO;
        [self playingVideo:[self selectedVideoURL]];
    }
    else {
        self.playerView.hidden = YES;
        [self displayingImage:[self selectedVideoURL]];
    }
}
Is there any way to make it simpler?
Thanks...
First, I couldn't find anyone else having this problem. I'm working on a game in SpriteKit - there are mainTitle.h/m and gamePlay.h/m files. Below is the code for the share button, which shares your progress via text, Facebook, Twitter, etc. The code is located in gamePlay.m inside a touch method. The code works; however, after the user chooses to send his/her score via text message, the new message window slides up and then the game appears to restart and load the mainTitle.m scene. Any ideas as to why this happens?
-(void)share {
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 1.0);
    [self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSString *message = [NSString stringWithFormat:@"message"];
    NSString *urlString = [NSString stringWithFormat:@"www..."];
    NSURL *gmURL = [NSURL URLWithString:urlString];

    UIActivityViewController *actVC = [[UIActivityViewController alloc]
        initWithActivityItems:@[message, gmURL, image] applicationActivities:nil];
    actVC.excludedActivityTypes = @[UIActivityTypePrint, UIActivityTypeAirDrop];

    UIViewController *viewControl = self.view.window.rootViewController;
    [viewControl presentViewController:actVC animated:YES completion:nil];
}
-(void)touchesBegan ... {
    [self share];
}
Probably it's not a good idea to call the share method in touchesBegan:, since it can fire multiple times under some conditions. Use e.g. a UIButton instead.
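A minimal sketch of that approach (the frame, the title, and the assumption that self.view is the scene's SKView are mine - adapt to your setup, e.g. add the button in didMoveToView:):

```objc
// Add a UIButton over the SKView and route taps to the existing share method.
// A button fires exactly once per tap, unlike touchesBegan:.
UIButton *shareButton = [UIButton buttonWithType:UIButtonTypeSystem];
shareButton.frame = CGRectMake(20, 20, 100, 44); // placeholder frame
[shareButton setTitle:@"Share" forState:UIControlStateNormal];
[shareButton addTarget:self action:@selector(share)
      forControlEvents:UIControlEventTouchUpInside];
[self.view addSubview:shareButton];
```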
I am making an app like Instagram, using the GPUImage framework. I have to take photos and videos and share them. I am able to capture photos using this framework, but now I have to capture video, and I am struggling with how to change the camera mode from photos to video. Any help or tutorial would be very good for me. I used this code for photo mode:
if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera])
{
    self.imagepicker.sourceType = UIImagePickerControllerSourceTypeCamera;
    [[NSBundle mainBundle] loadNibNamed:@"OverlayView" owner:self options:nil];
    self.overlayView.frame = self.imagepicker.cameraOverlayView.frame;
    self.imagepicker.cameraOverlayView = self.overlayView;
    self.overlayView = nil;

    CGSize result = [[UIScreen mainScreen] bounds].size;
    self.imagepicker.showsCameraControls = NO;
    self.imagepicker.allowsEditing = NO;
    self.imagepicker.wantsFullScreenLayout = NO;
    // self.imagepicker.mediaTypes = [[NSArray alloc] initWithObjects:(NSString *)kUTTypeMovie, nil];
}
else {
    self.imagepicker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
}
In my case, I'm using GPUImage to do both (pictures and videos). Therefore I've created two objects: one of type GPUImageStillCamera (pictures) and the other of type GPUImageVideoCamera (videos).
So whenever you need to switch between cameras, you basically stop the GPUImageStillCamera capture and initialize a video camera (note that you have to adapt this snippet to your project):
func initializeVideoCamera() {
    // Stop the capture of GPUImageStillCamera
    stillCamera.stopCameraCapture()

    videoCamera = GPUImageVideoCamera(sessionPreset: AVCaptureSessionPreset1920x1080, cameraPosition: .Back)
    videoCamera?.outputImageOrientation = .Portrait
    videoCamera?.addTarget(filter)

    // If a file already exists, AVAssetWriter won't let you record new frames, so delete the old movie
    unlink(pathToMovieFile)

    initializeWriteWithPath(pathToMovieFile)
    videoCamera?.startCameraCapture()
}
I am creating an app where a user can load their schedule into the app, and it is subsequently displayed.
When the image is allowed to be edited before loading, it shows up perfectly:
Once the user "chooses" the image, it shows up in the UIImageView blurred:
Here it is slightly zoomed in:
I know that the image resolution is okay because the image displays perfectly beforehand. How can I stop this from being blurred?
I am using the basic method of zooming a UIImageView in a UIScrollView.
Here is the code I use to assign the image. zoomscroll is the UIScrollView and myschedule is the UIImageView:
-(void) imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    [self dismissModalViewControllerAnimated:YES];

    // Obtaining the saving path
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *imagePath = [documentsDirectory stringByAppendingPathComponent:@"myschedule.png"];

    // Extracting the image from the picker and saving it
    NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];
    if ([mediaType isEqualToString:@"public.image"]) {
        UIImage *editedImage = [info objectForKey:UIImagePickerControllerEditedImage];
        NSData *webData = UIImagePNGRepresentation(editedImage);
        [webData writeToFile:imagePath atomically:YES];

        myschedule.image = editedImage;
        [zoomscroll addSubview:myschedule];
        zoomscroll.contentSize = CGSizeMake(myschedule.frame.size.width, myschedule.frame.size.height);
        zoomscroll.minimumZoomScale = 1.0;
        zoomscroll.maximumZoomScale = 4.0;
    }
}
Thank you!
iOS doesn't redraw views when it zooms them, it just scales the view up or down. The underlying implementation is basically a textured OpenGL polygon, which is why zooming is so fast. Regenerating the texture at a higher res is slow, so iOS doesn't do that unless you explicitly tell it to.
There are various ways you can fix this. The simplest is probably to set the contentSize of your scrollview to the actual size of the image and then zoom out initially, so that instead of zooming a small version of the image up to 400% (which results in blurring) the user is zooming back in from 25% up to 100%. Something like this:
myschedule.image = editedImage;
myschedule.frame = CGRectMake(0, 0, editedImage.size.width, editedImage.size.height);
[zoomscroll addSubview:myschedule];
zoomscroll.contentSize = editedImage.size;
zoomscroll.minimumZoomScale = 0.25;
zoomscroll.maximumZoomScale = 1.0;
zoomscroll.zoomScale = zoomscroll.minimumZoomScale;
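One thing to double-check: UIScrollView only zooms at all if its delegate supplies the view to scale. Assuming zoomscroll.delegate is set to this controller (an assumption on my part - it isn't shown in your snippet), that's:

```objc
// UIScrollViewDelegate: tell the scroll view which subview to zoom
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView {
    return myschedule;
}
```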
I'm trying to work out how to do this.
NOTE: I'm not an experienced objective-c developer (hence why I'm using PhoneGap in the first place)
The short of it: my UIWebView (no, not the PhoneGap one that renders the webapp - a 2nd UIWebView created in-memory and not visible) is not rendering into the PDF. I just get a blank PDF. I'll post some of my thinking and code, and hopefully someone will know what I'm doing wrong.
My starting place is that there is already a print plugin for PhoneGap here:
https://github.com/phonegap/phonegap-plugins/tree/master/iPhone/PrintPlugin
This plugin creates a UIWebView on-the-fly, you pass some HTML to it via JavaScript, and then it calls some print controller to do the printing.
So I borrowed some ideas from that. Then I noticed this awesome blog post on generating PDFs:
http://www.ioslearner.com/convert-html-uiwebview-pdf-iphone-ipad/
So I'm trying to combine the two into my own PhoneGap plugin for taking some HTML (from my webapp) and generating a PDF on-the-fly.
HEADER:
#import <Foundation/Foundation.h>
#import <QuartzCore/QuartzCore.h>
#ifdef PHONEGAP_FRAMEWORK
#import <PhoneGap/PGPlugin.h>
#else
#import "PGPlugin.h"
#endif
@interface ExportPlugin : PGPlugin <UIWebViewDelegate> {
    NSString *exportHTML;
}
@property (nonatomic, copy) NSString *exportHTML;

// This gets called from my HTML5 app (JavaScript):
- (void) exportPdf:(NSMutableArray *)arguments withDict:(NSMutableDictionary *)options;
@end
MAIN:
#import "ExportPlugin.h"
@interface ExportPlugin (Private)
- (void) doExport;
- (void) drawPdf;
@end

@implementation ExportPlugin
@synthesize exportHTML;

- (void) exportPdf:(NSMutableArray *)arguments withDict:(NSMutableDictionary *)options {
    NSUInteger argc = [arguments count];
    if (argc < 1) {
        return;
    }
    self.exportHTML = [arguments objectAtIndex:0];
    [self doExport];
}

int imageName = 0;
double webViewHeight = 0.0;

- (void) doExport {
    // Set the base URL to be the www directory.
    NSString *dbFilePath = [[NSBundle mainBundle] pathForResource:@"www" ofType:nil];
    NSURL *baseURL = [NSURL fileURLWithPath:dbFilePath];

    // Load custom html into a webview
    UIWebView *webViewExport = [[UIWebView alloc] init];
    webViewExport.delegate = self;
    //[webViewExport loadHTMLString:exportHTML baseURL:baseURL];
    [webViewExport loadHTMLString:@"<html><body><h1>testing</h1></body></html>" baseURL:baseURL];
}
- (BOOL)webView:(UIWebView *)theWebView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType
{
    return YES;
}

- (void)webViewDidFinishLoad:(UIWebView *)webViewExport
{
    webViewHeight = [[webViewExport stringByEvaluatingJavaScriptFromString:@"document.body.scrollHeight;"] integerValue];
    CGRect screenRect = webViewExport.frame;

    // WHY DO I HAVE TO SET THE SIZE? OTHERWISE IT IS 0
    screenRect.size.width = 768;
    screenRect.size.height = 1024;

    double currentWebViewHeight = webViewHeight;
    while (currentWebViewHeight > 0)
    {
        imageName++;

        UIGraphicsBeginImageContext(screenRect.size);
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        //[[UIColor blackColor] set];
        //CGContextFillRect(ctx, screenRect);
        [webViewExport.layer renderInContext:ctx];
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];
        NSString *pngPath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%d.png", imageName]];

        if (currentWebViewHeight < 960)
        {
            CGRect lastImageRect = CGRectMake(0, 960 - currentWebViewHeight, webViewExport.frame.size.width, currentWebViewHeight);
            CGImageRef imageRef = CGImageCreateWithImageInRect([newImage CGImage], lastImageRect);
            newImage = [UIImage imageWithCGImage:imageRef];
            CGImageRelease(imageRef);
        }

        [UIImagePNGRepresentation(newImage) writeToFile:pngPath atomically:YES];
        [webViewExport stringByEvaluatingJavaScriptFromString:@"window.scrollBy(0,960);"];
        currentWebViewHeight -= 960;
    }
    [self drawPdf];
}
- (void) drawPdf
{
    CGSize pageSize = CGSizeMake(612, webViewHeight);
    NSString *fileName = @"Demo.pdf";
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *pdfFileName = [documentsDirectory stringByAppendingPathComponent:fileName];

    UIGraphicsBeginPDFContextToFile(pdfFileName, CGRectZero, nil);

    // Mark the beginning of a new page.
    UIGraphicsBeginPDFPageWithInfo(CGRectMake(0, 0, pageSize.width, pageSize.height), nil);

    double currentHeight = 0.0;
    for (int index = 1; index <= imageName; index++)
    {
        NSString *pngPath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%d.png", index]];
        UIImage *pngImage = [UIImage imageWithContentsOfFile:pngPath];

        [pngImage drawInRect:CGRectMake(0, currentHeight, pageSize.width, pngImage.size.height)];
        currentHeight += pngImage.size.height;
    }
    UIGraphicsEndPDFContext();
}
@end
The first indication that something is not right is that, above, I have to set the UIWebView.frame size:
screenRect.size.width = 768;
screenRect.size.height = 1024;
But why? The PhoneGap PrintPlugin doesn't have to do this. If I don't set it, the size is 0, and then I get lots of context errors.
And then the next problem is that the UIWebView is not rendering anything. A symptom of the first problem perhaps?
How do I go about debugging this and working out what the problem is?
UPDATE
I'm pretty sure that it may be impossible to render the UIWebView layer into the image context, unless that UIWebView is actually visible.
I'm not sure how the PhoneGap PrintPlugin works, then. It seems to render its UIWebView just fine without it being visible.
I'm currently experimenting with rendering the actual PhoneGap UIWebView into the PDF (as opposed to my own UIWebView). But this is not ideal.
It means I have to hide all toolbars and whatnot, and then pan the UIWebView around so I capture everything outside the viewport. This is not ideal, because the user will visually see this occurring!
Point 1 above doesn't seem to work anyway, because the iPad is too slow to update the screen when dynamically fiddling with the layout. On iPad, if you do visual things very quickly (like panning the screen around), the iPad is too slow and just won't show them; you end up only seeing the end state. So when I take the screenshots, the screen visually hasn't panned (even though the DOM says it has). (Hope that makes sense.)
Agh, frustrating.
I've got a working solution now, but it's not ideal.
What I do is render the phonegap UIWebView into the PDF.
To do this is quite tricky. I have a couple of objective-c functions
- (void) takeScreenshot;
- (void) renderPdf;
that I call from Javascript.
Then I have to write a recursive JS algorithm that pans the screen in every direction whilst calling takeScreenshot.
In between calls to takeScreenshot I use setTimeout which gives a 20 millisecond break in the JS processing - enough time for the iPad to update the screen so the next screenshot can be taken.
It was a royal pain in the arse. Bounty is still open in case someone knows of a better way of dealing with this - I would be very curious to know!
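Roughly, the native side of takeScreenshot looks something like this (a simplified sketch, not my exact code; self.webView is the PhoneGap web view and self.screenshots is an NSMutableArray I use to collect the tiles for renderPdf):

```objc
// Renders whatever portion of the PhoneGap UIWebView is currently visible
// into an image, and stashes it for renderPdf to stitch together later.
// The JS side pans the page between calls, with a setTimeout in between
// so the iPad actually repaints before the next shot.
- (void) takeScreenshot {
    UIGraphicsBeginImageContextWithOptions(self.webView.bounds.size, NO, 0);
    [self.webView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *shot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self.screenshots addObject:shot];
}
```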
If you want to render a UIWebView into a PDF, I think you could go for this:
1/ use the convertRect:fromView: method implemented by your UIWebView object to get the CGRect
2/ see the UIPrintPageRenderer Class Reference to make something like a print preview
3/ use UIGraphicsGetCurrentContext to get the CGContextRef out of it
4/ create the PDF from the CGRect and CGContextRef (you can use the help provided in the Apple sample code ZoomingPDFViewer for building PDFs using CGPDF)
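Something along these lines (an untested sketch; the 612x792 US Letter page size is my choice, and setting paperRect/printableRect via key-value coding is a common workaround since those properties are read-only):

```objc
// Feed the web view's print formatter to a UIPrintPageRenderer,
// then draw each laid-out page into a PDF context.
UIPrintPageRenderer *renderer = [[UIPrintPageRenderer alloc] init];
[renderer addPrintFormatter:webView.viewPrintFormatter startingAtPageAtIndex:0];

CGRect paperRect = CGRectMake(0, 0, 612, 792); // US Letter at 72 dpi
[renderer setValue:[NSValue valueWithCGRect:paperRect] forKey:@"paperRect"];
[renderer setValue:[NSValue valueWithCGRect:paperRect] forKey:@"printableRect"];

NSMutableData *pdfData = [NSMutableData data];
UIGraphicsBeginPDFContextToData(pdfData, CGRectZero, nil);
for (NSInteger i = 0; i < renderer.numberOfPages; i++) {
    UIGraphicsBeginPDFPage();
    [renderer drawPageAtIndex:i inRect:UIGraphicsGetPDFContextBounds()];
}
UIGraphicsEndPDFContext();
// pdfData now holds the PDF bytes, ready to write to disk
```

The nice part of this approach is that UIPrintPageRenderer handles pagination itself, so there's no manual scrolling or screenshot stitching.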
Here is how to render a PDF in a UIWebView (webPage being your UIWebView); your delegate (here "self") could implement the UIWebViewDelegate protocol:
- (void)loadingPDFwithURL:(NSURL *)anURL {
    CGRect appFrame = [[UIScreen mainScreen] applicationFrame];
    appFrame.origin.y = 0;
    self.webPage = [[UIWebView alloc] initWithFrame:appFrame]; // alloc/init rather than calling init... on an existing object
    [self.webPage setScalesPageToFit:YES];
    [self.webPage setDelegate:self];
    NSURLRequest *requestObj = [NSURLRequest requestWithURL:anURL];
    [self.webPage loadRequest:requestObj];
    [self.view addSubview:self.webPage];
}
So you're giving the URL of the PDF (if it's in memory, just write the file in your application as the URL can be a filepath).
Another possibility is to dive into CGPdf but it's getting harder.
I have some pretty basic code to capture a still image using AVFoundation.
AVCaptureDeviceInput *newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:nil];

AVCaptureStillImageOutput *newStillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
                                AVVideoCodecJPEG, AVVideoCodecKey,
                                nil];
[newStillImageOutput setOutputSettings:outputSettings];
[outputSettings release];

AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];
[newCaptureSession beginConfiguration];
newCaptureSession.sessionPreset = AVCaptureSessionPreset640x480;
[newCaptureSession commitConfiguration];

if ([newCaptureSession canAddInput:newVideoInput]) {
    [newCaptureSession addInput:newVideoInput];
}
if ([newCaptureSession canAddOutput:newStillImageOutput]) {
    [newCaptureSession addOutput:newStillImageOutput];
}

self.stillImageOutput = newStillImageOutput;
self.videoInput = newVideoInput;
self.captureSession = newCaptureSession;

[newStillImageOutput release];
[newVideoInput release];
[newCaptureSession release];
My method that captures the still image is also pretty simple and prints out the orientation which is AVCaptureVideoOrientationPortrait:
- (void) captureStillImage
{
    AVCaptureConnection *stillImageConnection = [AVCamUtilities connectionWithMediaType:AVMediaTypeVideo fromConnections:[[self stillImageOutput] connections]];
    if ([stillImageConnection isVideoOrientationSupported]) {
        NSLog(@"isVideoOrientationSupported - orientation = %d", orientation);
        [stillImageConnection setVideoOrientation:orientation];
    }

    [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
        completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
            ALAssetsLibraryWriteImageCompletionBlock completionBlock = ^(NSURL *assetURL, NSError *error) {
                if (error) { /* HANDLE */ }
            };

            if (imageDataSampleBuffer != NULL) {
                CFDictionaryRef exifAttachments = CMGetAttachment(imageDataSampleBuffer, kCGImagePropertyExifDictionary, NULL);
                if (exifAttachments) {
                    NSLog(@"attachments: %@", exifAttachments);
                } else {
                    NSLog(@"no attachments");
                }
                self.stillImageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                self.stillImage = [UIImage imageWithData:self.stillImageData];
                UIImageWriteToSavedPhotosAlbum(self.stillImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
            } else {
                completionBlock(nil, error);
            }
        }];
}
So the device understands it's in portrait mode, as it should be; the EXIF attachments show me:
PixelXDimension = 640;
PixelYDimension = 480;
so it seems to know that we're in 640x480, and that means WxH (obviously...).
However, when I email the photo to myself from Apple's Photos app, I get a 480x640 image if I check the properties in Preview. This didn't make any sense to me until I dug further into the image properties and found that the image orientation is set to "6 (Rotated 90 degrees CCW)". I'm sure CCW is counter-clockwise.
So looking at the image in a browser:
http://tonyamoyal.com/stuff/things_that_make_you_go_hmm/photo.JPG
We see the image rotated 90 degrees CCW, and it is 640x480.
I'm really confused about this behavior. When I take a 640x480 still image using AVFoundation, I would expect the default to have no rotated orientation. I expect a 640x480 image oriented exactly as my eye sees the image in the preview layer. Can someone explain why this is happening and how to configure the capture so that when I save my image to the server to later display in a web view, it is not rotated 90 degrees CCW?
This happens because the orientation set in the metadata of the new image is being affected by the orientation of the AV system that creates it. The layout of the actual image data is, of course, different from the orientation mentioned in your metadata. Some image viewing programs respect the metadata orientation, some ignore it.
You can affect the metadata orientation of the AV system by calling:
AVCaptureConnection *videoConnection = ...;
if ([videoConnection isVideoOrientationSupported]) {
    [videoConnection setVideoOrientation:AVCaptureVideoOrientationSomething];
}
You can affect the metadata orientation of a UIImage by calling:
UIImage *rotatedImage = [[UIImage alloc] initWithCGImage:image.CGImage scale:1.0f orientation:UIImageOrientationSomething];
But the actual data from the AVCapture system will always appear with the wider dimension as X and the narrower dimension as Y, and will appear to be oriented in LandscapeLeft.
If you want the actual data to line up with what your metadata claims, you need to modify the actual data. You can do this by writing the image out to a new image using CGContexts and affine transforms. Or there is an easier workaround: use the UIImage+Resize package as discussed here, and resize the image to its current size by calling:
UIImage *rotatedImage = [image resizedImage:CGSizeMake(image.size.width, image.size.height) interpolationQuality:kCGInterpolationDefault];
This will rectify the data's orientation as a side effect.
If you don't want to include the whole UIImage+Resize thing, you can check out its code and strip out the parts where the data is transformed.