Adding a UIImage from a UIGraphics Context to a Tweet - objective-c

I have a game that I've been working on for iOS. We let our users tweet the results of their games and I thought it'd be fun to add a badge or something to the tweet to show details.
I create an image using UIKit. Then I attach that image on iOS 6.0 with -[SLComposeViewController addImage:] or on iOS 5.* with -[TWTweetComposeViewController addImage:], but neither of them will attach the image.
If I use Facebook or Weibo, the image attaches fine. With Twitter, no luck at all.
Has anybody had any luck attaching an image to a tweet?

If you're receiving NO as the return value of -[TWTweetComposeViewController addImage:], the documentation says: YES if successful; NO if the image does not fit in the currently available character space or if the view was already presented to the user.
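If the call is failing, a minimal sketch of checking that return value before presenting the sheet (resultBadge is a hypothetical UIImage drawn from your graphics context):
SLComposeViewController *sheet = [SLComposeViewController composeViewControllerForServiceType:SLServiceTypeTwitter];
// addImage: returns NO when the attachment cannot be added, for example
// because the sheet has already been presented.
if (![sheet addImage:resultBadge]) {
    NSLog(@"Image could not be attached to the tweet.");
}
[self presentViewController:sheet animated:YES completion:nil];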

Pull Single Frame from Video Feed (DJI Mobile SDK)

I am making a DJI Mobile SDK app and have set up an application that gets live video from the drone and displays it in a view, but I need to pull a single frame from the video feed to work with and cannot figure out how to do it!
One method would be to take a picture with the drone and then download it from the SD card, but I do not need the full-resolution image, and it feels like there must be a simpler method to just get a single frame from the video preview.
The code that feeds the video stream to the previewer is:
-(void)videoFeed:(DJIVideoFeed *)videoFeed didUpdateVideoData:(NSData *)videoData {
    // Forward the raw video data to the previewer for decoding and display.
    [[DJIVideoPreviewer instance] push:(uint8_t *)videoData.bytes length:(int)videoData.length];
}
Any ideas on how to pull an individual frame from the feed? Or is there maybe a way to have an iOS app just take a screenshot and work with that?
Thanks!
I'm not very familiar with iOS. For Android there is a sample that uses the DJI Mobile SDK to grab still images and use them for panorama stitching: https://github.com/DJI-Mobile-SDK-Tutorials/Android-PanoramaDemo
The equivalent iOS version of the panorama stitching demo is here: https://github.com/DJI-Mobile-SDK-Tutorials/iOS-PanoramaDemo
Maybe you can get an idea of how to grab a still image from there.
There are several threads about this for Android, and I don't think iOS would be much different, for example: how to get bitmap data from a drone camera stream in an Android application.
Getting the bitmap from the fpvWidget is by far the simplest and fastest solution:
public Bitmap getFrameBitmap() {
    return fpvWidget.getBitmap();
}
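On iOS, if a rendered frame is good enough, one option is to snapshot the view the DJIVideoPreviewer draws into. Below is a minimal sketch, assuming the previewer was bound to a view exposed as self.fpvView (a hypothetical property name); note that if the previewer renders with OpenGL, a plain view snapshot can come back black, in which case the previewer's own snapshot support (if your SDK version exposes one) is the safer route.
- (UIImage *)currentPreviewFrame {
    // Render the preview view's current contents into an image context.
    UIGraphicsBeginImageContextWithOptions(self.fpvView.bounds.size, YES, 0.0);
    [self.fpvView drawViewHierarchyInRect:self.fpvView.bounds afterScreenUpdates:NO];
    UIImage *frame = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return frame;
}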

Thumbnail MKMapView without Google Logo

I am in the process of developing a thumbnail MKMapView to show a single point on the map. However, as the thumbnail is only 70x61px, the Google logo takes up a large proportion of the map.
Can you please tell me a way of using the MKMapView so that the Google logo is less visible or can't be seen (while avoiding app rejection), or suggest any alternatives to using the MKMapView?
Thanks in advance.
Have you looked into the Google Maps Static API? It returns regular JPEG maps rather than interactive ones. You might be able to craft a URL that gets you a small enough image for your thumbnail, though I don't know whether that would be acceptable under their license.
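A minimal sketch of crafting such a URL (assuming coordinate is a CLLocationCoordinate2D for your point; the zoom level is just an example):
// Build a 70x61 static map image centered on the point of interest.
NSString *urlString = [NSString stringWithFormat:
    @"http://maps.googleapis.com/maps/api/staticmap?center=%f,%f&zoom=15&size=70x61&sensor=false",
    coordinate.latitude, coordinate.longitude];
// Synchronous fetch for brevity; do this off the main thread in a real app.
NSData *imageData = [NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]];
UIImage *thumbnail = [UIImage imageWithData:imageData];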
Start developing with the iOS 6 beta. There are significant changes to MapKit that remove Google as the data provider (and thus their logo). The final version of iOS 6 and its SDK will be released in the next couple of weeks, so you will be good to go submitting an iOS 6 app soon.

iOS Tweet Uncompressed Image with Transparency

I'm working on an application that handles image editing, and I'm at the point where I'm trying to integrate Twitter. So far it has worked great: I can send a tweet from within the app and attach the image the user is editing. The drawback I've noticed is that the image gets auto-compressed, which means that if the PNG the user is editing has transparency, it will no longer have transparency. This isn't good. Is there a way around this? I would like to be able to send a tweet and attach my PNG image WITH transparency, basically keeping it from being converted to a JPG once sent.
Here's the code I have so far. Very self-explanatory and straightforward.
SLComposeViewController *tweetSheet = [SLComposeViewController composeViewControllerForServiceType:SLServiceTypeTwitter];
[tweetSheet addImage:self.workingImage];
[self presentViewController:tweetSheet animated:YES completion:nil];
self.workingImage is the image the user is working on.
EDIT: I've updated the above code to work on iOS 6 and seem to have the exact same problem (which isn't too surprising, I guess). It looks like once the image is on Twitter, it is in JPG format. Is there any way to keep it in PNG format?
I'd hate to lose all of this simple code only to go down the route of using a 3rd party image hosting site.
EDIT 2: I've now converted all of my code to no longer use the alpha channel. This means I no longer care whether the image is a PNG or a JPEG, because all three RGB channels will always exist. Posting a tweet still compresses the image before posting it, no matter what quality the original image was.
I even posted an image to Twitter using the app, had it compressed by Twitter, saved the result, and tried to repeat the process using the newly compressed image, yet Twitter still compressed it!
I'm lost on this. Will Twitter (or even Facebook) compress images no matter what? Will my only option be a third-party image hosting site? I'd hate to lose all of the nice social features built into the iOS 6 framework just to use a third-party site...
It's a Twitter-side problem; it compresses your image regardless. Maybe you should consider uploading the PNG to your own server and then posting a link to it within the tweet.
You can also use other image hosting services.
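If you do go the self-hosting route, here is a minimal upload sketch (the endpoint URL is hypothetical, and a real app would use an asynchronous request with error handling):
// UIImagePNGRepresentation preserves the alpha channel that Twitter's re-encode drops.
NSData *pngData = UIImagePNGRepresentation(self.workingImage);
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
    [NSURL URLWithString:@"https://example.com/upload"]];
request.HTTPMethod = @"POST";
[request setValue:@"image/png" forHTTPHeaderField:@"Content-Type"];
request.HTTPBody = pngData;
NSURLResponse *response = nil;
NSError *error = nil;
[NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
The URL your server returns could then be added to the tweet text with -setInitialText:, keeping the SLComposeViewController flow intact.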

AVFoundation capture UIImage

I'm trying to capture one or more UIImages programmatically using AVFoundation.
I set up the sessions and input devices and everything, but when I try to find explanations on how to actually take the photos, all I get is muddled information about connections and whatnot.
I couldn't find a single example of actually taking a photo and saving it to a UIImage for further processing. All the examples use the constant kCGImagePropertyExifDictionary, which doesn't seem to exist in the iOS 5 SDK...
Can someone please provide me with a code or an explanation from top to bottom on how to take and save an image from the front facing camera to a UIImage using AVFoundation?
Thanks a lot!
To use kCGImagePropertyExifDictionary, you should #import <ImageIO/ImageIO.h>.
All of the other information you seek is inside the AVFoundation Programming Guide, particularly the Media Capture section.
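For the capture itself, here is a minimal sketch using AVCaptureStillImageOutput (assuming a configured, running session with self.stillImageOutput already added as an output and JPEG output settings):
AVCaptureConnection *connection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                                   completionHandler:^(CMSampleBufferRef sampleBuffer, NSError *error) {
    if (sampleBuffer == NULL) return;
    // Convert the JPEG sample buffer into a UIImage for further processing.
    NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
    UIImage *image = [UIImage imageWithData:jpegData];
    // ... use the image here ...
}];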

Save edited image to the iPhone gallery

I am working on an app where I load an image from the bundle and, with the help of Quartz, add two red circles on top of it. The image is loaded into an instance of the UIImageView class.
I want to save the image with the red circles; so far I have found only one method:
UIImageWriteToSavedPhotosAlbum(UIImage *image, id completionTarget, SEL completionSelector, void *contextInfo)
But it's not working, as it saves only the original image from the bundle. In some posts I have read that this can be done using CALayer, but I found no useful link explaining how. I would be glad if you could post an answer or a tutorial link for this.
Here's a trick! Referring to the "Capture UIView" technique, you can capture a snapshot of the UIImageView and save it using UIImageWriteToSavedPhotosAlbum.
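A minimal sketch of that approach, assuming the annotated view is exposed as self.imageView (and #import <QuartzCore/QuartzCore.h> for -renderInContext:):
// Render the image view (image plus the red-circle overlay) into a new image.
UIGraphicsBeginImageContextWithOptions(self.imageView.bounds.size, NO, 0.0);
[self.imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *annotatedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Save the composite, rather than the bundle original, to the photo album.
UIImageWriteToSavedPhotosAlbum(annotatedImage, nil, NULL, NULL);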