I have this Obj-C code:
What I need is to transform it into Swift code.
(So far I have managed to rewrite the C code in Swift; however, this part is a callback and, sadly, I have not found any way to convert it successfully.)
GPUImageRawDataOutput *rawDataOutput = [[GPUImageRawDataOutput alloc] initWithImageSize:CGSizeMake(640.0, 480.0) resultsInBGRAFormat:YES];
_rawDataOutput = rawDataOutput; // __unsafe_unretained instance variable
[rawDataOutput setNewFrameAvailableBlock:^{
    GLubyte *outputBytes = [_rawDataOutput rawBytesForImage];
    NSInteger bytesPerRow = [_rawDataOutput bytesPerRowInOutput];
    NSLog(@"Bytes per row: %d", bytesPerRow);
    NSData *dataForRawBytes = [NSData dataWithBytes:outputBytes length:/* I can't figure out how to get the length of the GLubyte pointer */];
    [_targetUIImageView setImage:[UIImage imageWithData:dataForRawBytes]]; // An instance UIImageView
}];
I'm coding blind from a cellphone, so give this a try with some help from the compiler:
let rawDataOutput = GPUImageRawDataOutput(imageSize: CGSize(width: 640.0, height: 480.0), resultsInBGRAFormat: true)
self.rawDataOutput = rawDataOutput
rawDataOutput.newFrameAvailableBlock = { [weak self] in
    guard let strongSelf = self, let rawDataOutput = strongSelf.rawDataOutput else { return }
    let outputBytes = rawDataOutput.rawBytesForImage()
    let bytesPerRow = rawDataOutput.bytesPerRowInOutput()
    print("Bytes per row \(bytesPerRow)")
    // Length = bytes per row * image height (480 here)
    let dataForRawBytes = Data(bytes: outputBytes, count: Int(bytesPerRow) * 480)
    strongSelf.targetUIImageView.image = UIImage(data: dataForRawBytes)
}
I am trying to get info on all the albums/photos using PHPhotoLibrary. I barely know Objective-C, and I've looked at some tutorials and samples but couldn't find everything that I needed.
Here is a link to the sample code I based my code on.
https://developer.apple.com/library/ios/samplecode/UsingPhotosFramework/Introduction/Intro.html#//apple_ref/doc/uid/TP40014575-Intro-DontLinkElementID_2
So far I have been able to get the album names and identifiers. I am also getting a list of photos and can get their identifiers as well, but not the filename. If I put a breakpoint in my function and inspect my PHAsset pointer, I can see the filename there (inside _filename), but if I try to access that variable in code, it does not exist.
So if anyone can provide sample code to get all the info on albums/photos/thumbnails, that would be awesome. Even just getting the filename would be a big help.
Here is the code I have tried so far:
-(void)awakeFromNib{
    NSMutableArray *allPhotos = self.getAllPhotos;
    for (int x = 0; x < allPhotos.count; x++)
    {
        PHAsset *photo = [self getPhotoAtIndex:x];
        PHAssetSourceType source = photo.sourceType;
        NSString *identifier = photo.localIdentifier;
        NSString *description = photo.description;
        NSUInteger height = photo.pixelHeight;
        NSUInteger width = photo.pixelWidth;
        NSLog(@"Test photo info");
    }
}

-(PHAsset*) getPhotoAtIndex:(NSInteger) index
{
    return [self.getAllPhotos objectAtIndex:index];
}

-(NSMutableArray *) getAllPhotos
{
    NSMutableArray *photos = [[NSMutableArray alloc] init];
    PHFetchOptions *allPhotosOptions = [[PHFetchOptions alloc] init];
    allPhotosOptions.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"creationDate" ascending:YES]];
    PHFetchResult *allPhotos = [PHAsset fetchAssetsWithOptions:allPhotosOptions];
    PHFetchResult *fetchResult = @[allPhotos][0];
    for (int x = 0; x < fetchResult.count; x++) {
        PHAsset *asset = fetchResult[x];
        photos[x] = asset;
    }
    return photos;
}
As you can see, I can get the image height and width and its identifier, but I cannot get the URL to it.
I have since found a way to get the URL of my photo:
-(void)getImageURL:(PHAsset*) asset
{
    PHContentEditingInputRequestOptions *options = [[PHContentEditingInputRequestOptions alloc] init];
    [options setCanHandleAdjustmentData:^BOOL(PHAdjustmentData *adjustmentData) {
        return [adjustmentData.formatIdentifier isEqualToString:AdjustmentFormatIdentifier] && [adjustmentData.formatVersion isEqualToString:@"1.0"];
    }];
    [asset requestContentEditingInputWithOptions:options completionHandler:^(PHContentEditingInput *contentEditingInput, NSDictionary *info)
    {
        NSURL *url = contentEditingInput.fullSizeImageURL;
    }];
}
Filenames in the Photos library are an implementation detail and subject to change. There are various private APIs for discovering them (and ways to use valueForKey or other public introspection APIs to find where they're hidden), but they aren't something to be relied upon. In particular, an asset that's been edited is likely to have a different filename than the original.
What do you need a filename/URL for? If you're just uniquely identifying the asset across launches of your app, use localIdentifier. If you're showing it to the user... why? Something like IMG_0234.jpg vs IMG_5672.jpg has little meaning to the average user.
To fetch the assets in a specific album, use fetchAssetsInAssetCollection:options:. To fetch the album(s) containing a specific asset, use fetchAssetCollectionsContainingAsset:withType:options:. To discover the list(s) of albums, use other APIs on PHAssetCollection and its superclass PHCollection.
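To make that concrete, here is a minimal sketch (assuming the app already has photo library authorization; error handling omitted, and the method name logAlbumsAndAssets is just for illustration) of walking the albums and logging the assets inside each one:

#import <Photos/Photos.h>

- (void)logAlbumsAndAssets
{
    PHFetchResult *albums = [PHAssetCollection fetchAssetCollectionsWithType:PHAssetCollectionTypeAlbum
                                                                     subtype:PHAssetCollectionSubtypeAny
                                                                     options:nil];
    [albums enumerateObjectsUsingBlock:^(PHAssetCollection *collection, NSUInteger idx, BOOL *stop) {
        NSLog(@"Album: %@ (%@)", collection.localizedTitle, collection.localIdentifier);

        // All assets contained in this particular album
        PHFetchResult *assets = [PHAsset fetchAssetsInAssetCollection:collection options:nil];
        [assets enumerateObjectsUsingBlock:^(PHAsset *asset, NSUInteger assetIdx, BOOL *assetStop) {
            NSLog(@"  Asset: %@ (%lu x %lu)",
                  asset.localIdentifier,
                  (unsigned long)asset.pixelWidth,
                  (unsigned long)asset.pixelHeight);
        }];
    }];
}

For thumbnails, each asset can then be passed to PHImageManager's requestImageForAsset:targetSize:contentMode:options:resultHandler:.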
In my game, one player creates an array of SKSpriteNodes in a random order that I need to send to all other players in the match. I am not sure if this is possible or if I need a workaround that does the same thing, but in examples I have seen a typedef struct containing pointers like the one below:
typedef struct {
    Message message;
    NSString *piece;
    SKSpriteNode *pieces;
} MessagePieceList;
The code above gives me the error "ARC forbids Objective-C objects in structs" when I try creating the NSString or SKSpriteNode. I have also tried
__unsafe_unretained NSString * piece;
but then I believe I get an empty value when it reaches the other players in my didReceiveData method, since it is not retained. I've been able to send an integer with no problem by doing the following.
GameScene.m has:
_networkingEngine = [[MultiplayerNetworking alloc] init];
int piece = 5;
[_networkingEngine sendpiecesOrder:piece];
MultiplayerNetworking.h has:
-(void)sendpiecesOrder:(int)piece1;
MultiplayerNetworking.m has:
typedef struct {
    Message message;
    int piece;
} MessageList;

-(void)sendpiecesOrder:(int)piece {
    MessageList messagePiece;
    messagePiece.message.messageType = kMessagePieceOrder;
    messagePiece.piece = piece;
    NSData *data = [NSData dataWithBytes:&messagePiece length:sizeof(MessageList)];
    [self sendData:data];
}

and the receiving side (inside didReceiveData) has:

} else if (message->messageType == kMessagePieceOrder) {
    MessageList *messagePieceList1 = (MessageList *)[data bytes];
    NSLog(@"Pieces order message received");
    NSLog(@"%d", messagePieceList1->piece);
}
This prints 5 when the other player(s) receive the data in their didReceiveData method. If I can't directly send the array of SKSpriteNodes, is there a way to mold this approach so that it somehow communicates the list of SKSpriteNodes to another player?
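One way to mold the int-sending approach above, sketched under the assumption that both players can build the same set of pieces locally so only the random order needs to travel: put the piece indices into a fixed-length C array inside the struct, and let each receiver rebuild its own SKSpriteNodes from those indices. The MessagePieceOrder struct, kMaxPieces constant, and sendPieceOrder: method below are hypothetical names, not part of the original code.

#define kMaxPieces 16   // hypothetical upper bound on the number of pieces

typedef struct {
    Message message;
    uint32_t pieceCount;
    uint32_t pieceOrder[kMaxPieces]; // indices into a piece list both players already share
} MessagePieceOrder;

// order is an NSArray of NSNumber indices describing the random arrangement
-(void)sendPieceOrder:(NSArray *)order {
    MessagePieceOrder messagePiece;
    messagePiece.message.messageType = kMessagePieceOrder;
    messagePiece.pieceCount = (uint32_t)MIN(order.count, (NSUInteger)kMaxPieces);
    for (uint32_t i = 0; i < messagePiece.pieceCount; i++) {
        messagePiece.pieceOrder[i] = [order[i] unsignedIntValue];
    }
    NSData *data = [NSData dataWithBytes:&messagePiece length:sizeof(MessagePieceOrder)];
    [self sendData:data];
}

The receiver would read the struct back exactly as in the existing kMessagePieceOrder branch and reorder its local sprites accordingly.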
I've done an exhaustive search on this, and I know similar questions have been posted before about NSBitmapImageRep, but none of them seems specific to what I'm trying to do, which is simply:
1. Read in an image from the desktop (but NOT display it)
2. Create an NSBitmap representation of that image
3. Iterate through the pixels to change some colours
4. Save the modified bitmap representation as a separate file
Since I've never worked with bitmaps before I thought I'd just try to create and save one first, and worry about modifying pixels later. That seemed really straightforward, but I just can't get it to work. Apart from the file saving aspect, most of the code is borrowed from another answer found on StackOverflow and shown below:
-(void)processBitmapImage:(NSString*)aFilepath
{
    NSImage *theImage = [[NSImage alloc] initWithContentsOfFile:aFilepath];
    if (theImage)
    {
        CGImageRef CGImage = [theImage CGImageForProposedRect:nil context:nil hints:nil];
        NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithCGImage:CGImage];
        NSInteger width = [imageRep pixelsWide];
        NSInteger height = [imageRep pixelsHigh];
        long rowBytes = [imageRep bytesPerRow];

        // above matches the original size indicating NSBitmapImageRep was created successfully
        printf("WIDE pix = %ld\n", width);
        printf("HIGH pix = %ld\n", height);
        printf("Row bytes = %ld\n", rowBytes);

        // We'll worry about this part later...
        /*
        unsigned char *pixels = [imageRep bitmapData];
        int row, col;
        for (row = 0; row < height; row++)
        {
            // etc ...
            for (col = 0; col < width; col++)
            {
                // etc...
            }
        }
        */

        // So, let's see if we can just SAVE the (unmodified) bitmap first ...
        NSData *pngData = [imageRep representationUsingType:NSPNGFileType properties:nil];
        NSString *destinationStr = [self pathForDataFile];
        BOOL returnVal = [pngData writeToFile:destinationStr atomically:NO];
        NSLog(@"did we succeed?: %@", (returnVal ? @"YES" : @"NO")); // the writeToFile call FAILS!

        [imageRep release];
    }
    [theImage release];
}
While I like this code for its simplicity, another potential issue down the road might be that the Apple docs advise us to treat bitmaps returned by initWithCGImage: as read-only objects…
Can anyone please tell me where I'm going wrong with this code, and how I can modify it to work? While the overall concept looks okay to my non-expert eye, I suspect I'm making a dumb mistake and overlooking something quite basic. Thanks in advance :-)
That's a fairly roundabout way to create the NSBitmapImageRep. Try creating it like this:
NSBitmapImageRep* imageRep = [NSBitmapImageRep imageRepWithContentsOfFile:aFilepath];
Of course, the above does not give you ownership of the image rep object, so don't release it at the end.
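Putting that together, a minimal sketch of how the original method could look with this change (keeping the question's pathForDataFile helper and manual-retain-release style, and assuming aFilepath points to a format NSBitmapImageRep can decode):

-(void)processBitmapImage:(NSString*)aFilepath
{
    NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithContentsOfFile:aFilepath];
    if (imageRep)
    {
        // Pixel tweaks would go here later, via [imageRep bitmapData].

        NSData *pngData = [imageRep representationUsingType:NSPNGFileType properties:nil];
        NSString *destinationStr = [self pathForDataFile];
        BOOL returnVal = [pngData writeToFile:destinationStr atomically:NO];
        NSLog(@"did we succeed?: %@", (returnVal ? @"YES" : @"NO"));
    }
    // imageRep is autoreleased, so there is nothing to release here.
}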
I'm a new developer in Obj-C and I'm trying to make a fill-color app. When I touch an image, the color should change, but I get a memory leak in this function and need your help:
-(void)updateImageWithColorSelected:(int)pos {
    CGImageRef imageRef = self.basicImage.CGImage;
    NSData *data = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef)); // leak here
    Byte *pixels = (Byte *)[data bytes];

    // change color...
    for (int i = 0; i < IMG_SIZE; i++) {
        pixels[i] = 255;
    }

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, [data length], NULL);
    CGImageRef newImageRef = CGImageCreate(w, h....);
    self.basicImage = [UIImage imageWithCGImage:newImageRef];

    // release newImageRef
    CGImageRelease(newImageRef);

    // set basic image to img
    [self.img setImage:self.basicImage];

    data = nil;
    [data release];
}
I tried removing all the code except the NSData *data = CGDataProviderCopyData line, and the app still leaks.
Do you guys have any idea how to release "data"?
Thank you in advance.
// set basic image to img
[self.img setImage:self.basicImage];
data = nil;
[data release];
}
You're sending release to a nil pointer.
[data release];
data = nil;
}
This will do better.
Edit: the issue with CGDataProviderCreateWithData
When data is released, the data pointer you passed to CGDataProviderCreateWithData becomes invalid. This is expected. The proper use of this function requires you allocate a buffer for the data and provide a callback to release the data when the provider is released.
The best solution for you is to use CGDataProviderCreateWithCFData instead, taking advantage of the toll-free bridging between Foundation and CoreFoundation objects.
Use:
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
Note that at present the data provider created by the call to CGDataProviderCreateWithData() or CGDataProviderCreateWithCFData() is also being leaked, and should be released by calling CGDataProviderRelease(). (This leak is undoubtedly minor compared to the original leaked data.)
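Putting the pieces together, a sketch of how the create/release calls can be paired (deriving the geometry from the original CGImageRef rather than the elided w, h arguments, and keeping the question's manual-retain-release style):

CGImageRef imageRef = self.basicImage.CGImage;
NSData *data = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
Byte *pixels = (Byte *)[data bytes];
// ... modify pixels here, as in the question ...

CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
CGImageRef newImageRef = CGImageCreate(CGImageGetWidth(imageRef),
                                       CGImageGetHeight(imageRef),
                                       CGImageGetBitsPerComponent(imageRef),
                                       CGImageGetBitsPerPixel(imageRef),
                                       CGImageGetBytesPerRow(imageRef),
                                       CGImageGetColorSpace(imageRef),
                                       CGImageGetBitmapInfo(imageRef),
                                       provider, NULL, false, kCGRenderingIntentDefault);
self.basicImage = [UIImage imageWithCGImage:newImageRef];
[self.img setImage:self.basicImage];

// Balance every Copy/Create with a Release.
CGImageRelease(newImageRef);
CGDataProviderRelease(provider);
[data release]; // balances CGDataProviderCopyData under MRC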
Currently, I'm doing a little test project to see if I can get samples from an AVAssetReader to play back using an AudioQueue on iOS.
I've read this:
( Play raw uncompressed sound with AudioQueue, no sound )
and this: ( How to correctly read decoded PCM samples on iOS using AVAssetReader -- currently incorrect decoding ),
which both actually did help. Before reading them, I was getting no sound at all. Now I'm getting sound, but the audio plays back SUPER fast. This is my first foray into audio programming, so any help is greatly appreciated.
I initialize the reader thusly:
NSDictionary * outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
[NSNumber numberWithFloat:44100.0], AVSampleRateKey,
[NSNumber numberWithInt:2], AVNumberOfChannelsKey,
[NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
[NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
nil];
output = [[AVAssetReaderAudioMixOutput alloc] initWithAudioTracks:uasset.tracks audioSettings:outputSettings];
[reader addOutput:output];
...
And I grab the data thusly:
CMSampleBufferRef ref = [output copyNextSampleBuffer];
// NSLog(@"%@", ref);
if (ref == NULL)
    return;
// copy data to file
// read next one
AudioBufferList audioBufferList;
NSMutableData *data = [NSMutableData data];
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(ref, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
// NSLog(@"%@", blockBuffer);
if (blockBuffer == NULL)
{
    [data release];
    return;
}
if (&audioBufferList == NULL)
{
    [data release];
    return;
}
// stash data in same object
for (int y = 0; y < audioBufferList.mNumberBuffers; y++)
{
    // NSData *throwData;
    AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
    [self.delegate streamer:self didGetAudioBuffer:audioBuffer];
    /*
    Float32 *frame = (Float32 *)audioBuffer.mData;
    throwData = [NSData dataWithBytes:audioBuffer.mData length:audioBuffer.mDataByteSize];
    [self.delegate streamer:self didGetAudioBuffer:throwData];
    [data appendBytes:audioBuffer.mData length:audioBuffer.mDataByteSize];
    */
}
which eventually brings us to the audio queue, set up in this way:
//Apple's own code for canonical PCM
audioDesc.mSampleRate = 44100.0;
audioDesc.mFormatID = kAudioFormatLinearPCM;
audioDesc.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
audioDesc.mBytesPerPacket = 2 * sizeof (AudioUnitSampleType); // 8
audioDesc.mFramesPerPacket = 1;
audioDesc.mBytesPerFrame = 1 * sizeof (AudioUnitSampleType); // 8
audioDesc.mChannelsPerFrame = 2;
audioDesc.mBitsPerChannel = 8 * sizeof (AudioUnitSampleType); // 32
err = AudioQueueNewOutput(&audioDesc, handler_OSStreamingAudio_queueOutput, self, NULL, NULL, 0, &audioQueue);
if(err){
#pragma warning handle error
//never errs, am using breakpoint to check
return;
}
and we enqueue thusly
while (inNumberBytes)
{
    size_t bufSpaceRemaining = kAQDefaultBufSize - bytesFilled;
    if (bufSpaceRemaining < inNumberBytes)
    {
        AudioQueueBufferRef fillBuf = audioQueueBuffer[fillBufferIndex];
        fillBuf->mAudioDataByteSize = bytesFilled;
        err = AudioQueueEnqueueBuffer(audioQueue, fillBuf, 0, NULL);
    }

    bufSpaceRemaining = kAQDefaultBufSize - bytesFilled;
    size_t copySize;
    if (bufSpaceRemaining < inNumberBytes)
    {
        copySize = bufSpaceRemaining;
    }
    else
    {
        copySize = inNumberBytes;
    }

    if (bytesFilled > packetBufferSize)
    {
        return;
    }

    AudioQueueBufferRef fillBuf = audioQueueBuffer[fillBufferIndex];
    memcpy((char *)fillBuf->mAudioData + bytesFilled, (const char *)(inInputData + offset), copySize);

    bytesFilled += copySize;
    packetsFilled = 0;
    inNumberBytes -= copySize;
    offset += copySize;
}
}
I tried to be as code inclusive as possible so as to make it easy for everyone to point out where I'm being a moron. That being said, I have a feeling my problem occurs either in the output settings declaration of the track reader or in the actual declaration of the AudioQueue (where I describe to the queue what kind of audio I'm going to be sending it). The fact of the matter is, I don't really know mathematically how to actually generate those numbers (bytes per packet, frames per packet, what have you). An explanation of that would be greatly appreciated, and thanks for the help in advance.
Not sure how much of an answer this is, but there is too much text and too many links for a comment, and hopefully it will help (maybe guide you to your answer).
First off, I know from my current project that adjusting the sample rate will affect the speed of the sound, so you can try playing with those settings. But 44k is what I see in most default implementations, including the Apple example SpeakHere. However, I would spend some time comparing your code to that example, because there are quite a few differences, like checking before enqueueing.
First check out this posting https://stackoverflow.com/a/4299665/530933
It talks about how you need to know the audio format, specifically how many bytes are in a frame, and how to cast appropriately.
Also, good luck. I have had quite a few questions posted here, on the Apple forums, and on the iOS forum (not the official one), with very few responses and little help. To get where I am today (audio recording & streaming in ulaw) I ended up having to open an Apple Dev Support ticket, which, prior to tackling audio, I never knew existed. One good thing is that if you have a valid dev account you get 2 incidents for free! CoreAudio is not fun: documentation is sparse, and besides SpeakHere there are not many examples. One thing I did find is that the framework headers do have some good info, as does this book. Unfortunately I have only started the book, otherwise I might be able to help you further.
You can also check some of my own postings which I have tried to answer to the best of my abilities.
This is my main audio question, on which I have spent a lot of time compiling all the pertinent links and code:
using AQRecorder (audioqueue recorder example) in an objective c class
trying to use AVAssetWriter for ulaw audio (2)
For some reason, even though every example I've seen of the audio queue using LPCM had
ASBD.mBitsPerChannel = 8* sizeof (AudioUnitSampleType);
For me it turns out I needed
ASBD.mBitsPerChannel = 2*bytesPerSample;
with the rest of the description being:
ASBD.mFormatID = kAudioFormatLinearPCM;
ASBD.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
ASBD.mBytesPerPacket = bytesPerSample;
ASBD.mBytesPerFrame = bytesPerSample;
ASBD.mFramesPerPacket = 1;
ASBD.mBitsPerChannel = 2*bytesPerSample;
ASBD.mChannelsPerFrame = 2;
ASBD.mSampleRate = 48000;
I have no idea why this works, which bothers me a great deal... but hopefully I can figure it all out eventually.
If anyone can explain to me why this works, I'd be very thankful.
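For what it's worth, the field values for linear PCM follow mechanically from the format: mBytesPerFrame = mChannelsPerFrame × (mBitsPerChannel / 8) for interleaved data, mFramesPerPacket = 1 for uncompressed audio, and mBytesPerPacket = mBytesPerFrame × mFramesPerPacket. Note also that kAudioFormatFlagsAudioUnitCanonical describes 32-bit 8.24 fixed-point, non-interleaved samples on iOS, while the AVAssetReader output settings earlier in the question ask for 16-bit interleaved integer samples. A description that matches the reader side instead would look roughly like this (a sketch, not tested against the code above):

AudioStreamBasicDescription audioDesc = {0};
audioDesc.mSampleRate       = 44100.0;                          // must match AVSampleRateKey
audioDesc.mFormatID         = kAudioFormatLinearPCM;
audioDesc.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
audioDesc.mChannelsPerFrame = 2;                                 // AVNumberOfChannelsKey
audioDesc.mBitsPerChannel   = 16;                                // AVLinearPCMBitDepthKey
audioDesc.mBytesPerFrame    = audioDesc.mChannelsPerFrame * (audioDesc.mBitsPerChannel / 8); // 4
audioDesc.mFramesPerPacket  = 1;                                 // always 1 for uncompressed PCM
audioDesc.mBytesPerPacket   = audioDesc.mBytesPerFrame * audioDesc.mFramesPerPacket;         // 4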