This answer and this answer both show how to set TCP_NODELAY for NSOutputStream in Objective-C. I need some help on getting this to work in Swift, and I believe that it's probably just a mistake I'm making with the API.
This is the Objective-C solution that supposedly works:
CFDataRef nativeSocket = CFWriteStreamCopyProperty((__bridge CFWriteStreamRef)myWriteStream, kCFStreamPropertySocketNativeHandle);
CFSocketNativeHandle *sock = (CFSocketNativeHandle *)CFDataGetBytePtr(nativeSocket);
setsockopt(*sock, IPPROTO_TCP, TCP_NODELAY, &(int){ 1 }, sizeof(int));
CFRelease(nativeSocket);
This is my attempt at translating it into Swift:
let nativeSocket: CFDataRef = CFWriteStreamCopyProperty(myWriteStream, kCFStreamPropertySocketNativeHandle).data
let sock = CFSocketNativeHandle(CFDataGetBytePtr(nativeSocket).memory)
var one = Int(1)
setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, UInt32(sizeofValue(one)))
The real issue is getting a CFDataRef from myWriteStream (an NSOutputStream), and then getting a CFSocketNativeHandle from that. In the Swift code above, it always crashes on the first line while trying to create nativeSocket (specifically, trying to access the data property).
Can somebody help me out with this?
Ok, so after a while I figured it out. Here's the working code in case somebody else needs it:
let socketData = CFWriteStreamCopyProperty(self.outputStream!, kCFStreamPropertySocketNativeHandle) as! CFData
let handle = CFSocketNativeHandle(CFDataGetBytePtr(socketData).memory)
var one: Int = 1
let size = UInt32(sizeofValue(one))
setsockopt(handle, IPPROTO_TCP, TCP_NODELAY, &one, size)
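For anyone on a newer Swift: sizeofValue and .memory are gone, so the same idea in current syntax looks roughly like this (a sketch, assuming outputStream is the open OutputStream from above and that it is backed by a TCP socket):
if let handleData = outputStream.property(forKey: Stream.PropertyKey(kCFStreamPropertySocketNativeHandle as String)) as? Data {
    let handle = handleData.withUnsafeBytes { $0.load(as: CFSocketNativeHandle.self) }
    // TCP_NODELAY expects a C int, so pass an Int32 and its size
    var one: Int32 = 1
    setsockopt(handle, IPPROTO_TCP, TCP_NODELAY, &one, socklen_t(MemoryLayout<Int32>.size))
}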
I was trying to port the following Swift code to Objective-C:
var contextImage: UIImage? = ...
let image: CGImage? = contextImage?.cgImage
let dataProvider: CGDataProvider? = image?.dataProvider
let data: CFData? = dataProvider?.data
let baseAddress = CFDataGetBytePtr(data!)
contextImage = nil
let unmanagedData = Unmanaged<CFData>.passRetained(data!)
var pixelBuffer: CVPixelBuffer?
let status = CVPixelBufferCreateWithBytes(nil,
(image?.width)!,
(image?.height)!,
kCVPixelFormatType_32BGRA,
UnsafeMutableRawPointer(mutating: baseAddress!),
(image?.bytesPerRow)!,
{ releaseContext, baseAddress in
let contextData = Unmanaged<CFData>.fromOpaque(releaseContext!)
contextData.release()
},
unmanagedData.toOpaque(),
nil,
&pixelBuffer)
but I got stuck at the Unmanaged section and was not able to find the proper Objective-C way of doing that under ARC (the documentation of Unmanaged seems to exist only for Swift):
CGImageRef image = contextImage.CGImage;
CGDataProviderRef dataProvider = CGImageGetDataProvider(image);
CFDataRef data = CGDataProviderCopyData(dataProvider);
const UInt8 * baseAddress = CFDataGetBytePtr(data);
contextImage = nil;
// ... now what?
Eventually I accomplished it by integrating a Swift file into the Objective-C project but I still wonder, what is the proper way of porting that original Swift code in Objective-C?
I don't think you need Unmanaged. Even in the Swift documentation for this function, raw pointers are used.
This is what the declaration looks like in Objective-C:
CVReturn CVPixelBufferCreateWithBytes (CFAllocatorRef allocator,
size_t width,
size_t height,
OSType pixelFormatType,
void *baseAddress,
size_t bytesPerRow,
CVPixelBufferReleaseBytesCallback releaseCallback,
void *releaseRefCon,
CFDictionaryRef pixelBufferAttributes,
CVPixelBufferRef *pixelBufferOut);
A call to it could look something like this:
CVPixelBufferRef pixelBuffer = NULL;
CVReturn cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
FRAME_WIDTH,
FRAME_HEIGHT,
kCVPixelFormatType_32BGRA,
(void*)CFDataGetBytePtr(imageData),
CGImageGetBytesPerRow(image),
NULL,
NULL,
NULL,
&pixelBuffer);
More in the documentation for CVPixelBufferCreate.
I was pointed to this Objective-C snippet from WWDC 14, but I'm working on a Swift project.
CMIOObjectPropertyAddress prop = {
kCMIOHardwarePropertyAllowScreenCaptureDevices,
kCMIOObjectPropertyScopeGlobal,
kCMIOObjectPropertyElementMaster
};
UInt32 allow = 1;
CMIOObjectSetPropertyData(kCMIOObjectSystemObject, &prop, 0, NULL, sizeof(allow), &allow);
I then tried rewriting to Swift:
var prop : CMIOObjectPropertyAddress {
kCMIOHardwarePropertyAllowScreenCaptureDevices
kCMIOObjectPropertyScopeGlobal
kCMIOObjectPropertyElementMaster
}
var allow:UInt32 = 1
CMIOObjectSetPropertyData(kCMIOObjectSystemObject, &prop, 0, nil, sizeof(UInt32), &allow)
But it doesn't even compile. I don't know how to translate the CMIOObjectPropertyAddress struct. Xcode says:
/Users/mortenjust/Dropbox/hack/learning/screenrec/screenrec/deleteme.swift:32:61:
Cannot assign to a get-only property 'prop'
A C struct translates as a Swift struct. Use the implicit memberwise initializer:
var prop = CMIOObjectPropertyAddress(
mSelector: UInt32(kCMIOHardwarePropertyAllowScreenCaptureDevices),
mScope: UInt32(kCMIOObjectPropertyScopeGlobal),
mElement: UInt32(kCMIOObjectPropertyElementMaster))
The cool part is when you type CMIOObjectPropertyAddress(, code completion gives you the rest.
You're right, just got it running right this second. Turns out I also had to correct for some of the types. Here's the complete translation:
var prop = CMIOObjectPropertyAddress(
mSelector: CMIOObjectPropertySelector(kCMIOHardwarePropertyAllowScreenCaptureDevices),
mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeGlobal),
mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementMaster))
var allow : UInt32 = 1
var dataSize : UInt32 = 4
var zero : UInt32 = 0
CMIOObjectSetPropertyData(CMIOObjectID(kCMIOObjectSystemObject), &prop, zero, nil, dataSize, &allow)
var session = AVCaptureSession()
session.sessionPreset = AVCaptureSessionPresetHigh
var devices = AVCaptureDevice.devices()
for device in devices {
println(device)
}
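For anyone on a newer toolchain: sizeof is gone in current Swift, so instead of hard-coding dataSize = 4 you can derive it from the type (a rough sketch, reusing prop and allow from above):
let dataSize = UInt32(MemoryLayout<UInt32>.size) // 4 bytes, the size of `allow`
CMIOObjectSetPropertyData(CMIOObjectID(kCMIOObjectSystemObject), &prop, 0, nil, dataSize, &allow)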
Maybe I'm the first person doing this in Swift, but there seems to be nothing on the net about using & / inout together with uint8_t in Swift. Could someone translate this, please? Is this relationship bitwise?
Objective-C
uint8_t *buf=(uint8_t *) CVPixelBufferGetBaseAddress(cvimgRef);
Swift attempt
let inout buf:uint8_t = SOMETHING HERE CVPixelBufferGetBaseAddress(cvimgRef)
CVPixelBufferGetBaseAddress() returns an UnsafeMutablePointer<Void>,
which can be converted to a UInt8 pointer via
let buf = UnsafeMutablePointer<UInt8>(CVPixelBufferGetBaseAddress(pixelBuffer))
Update for Swift 3 (Xcode 8), checked with Swift 5 (Xcode 11):
if let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer) {
let buf = baseAddress.assumingMemoryBound(to: UInt8.self)
// `buf` is `UnsafeMutablePointer<UInt8>`
} else {
// `baseAddress` is `nil`
}
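One caveat worth adding to the above: the base address is only valid while the pixel buffer is locked, so a typical call site looks roughly like this (a sketch, not tied to any particular pixel format):
CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
if let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer) {
    let buf = baseAddress.assumingMemoryBound(to: UInt8.self)
    let byteCount = CVPixelBufferGetBytesPerRow(pixelBuffer) * CVPixelBufferGetHeight(pixelBuffer)
    // bytes are buf[0] ... buf[byteCount - 1] while the lock is held
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)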
I'm trying to capture a window list in a Mac OS X app using Swift. The CGWindowListCreateImageFromArray function requires a CFArray. I've tried several things and this is the closest I've got. Or is there a better way to convert the array?
import Cocoa
// Example swift array of CGWindowID's
var windowIDs = [CGWindowID]();
windowIDs.append(1);
windowIDs.append(2);
// Convert to CFArray using CFArrayCreate
let allocator = kCFAllocatorDefault
let numValues = windowIDs.count as CFIndex
let callbacks: UnsafePointer<CFArrayCallBacks> = nil
var values: UnsafeMutablePointer<UnsafePointer<Void>> = nil
/* how do I convert windowIDs to UnsafeMutablePointer<UnsafePointer<Void>> for the values? */
let windowIDsCFArray = CFArrayCreate(allocator, values, numValues, callbacks);
let capture = CGWindowListCreateImageFromArray(CGRectInfinite, windowIDsCFArray, CGWindowImageOption(kCGWindowListOptionOnScreenOnly));
You can initialize your UnsafeMutablePointer with your array as long as you declare your window IDs as CFTypeRef:
var windows: [CFTypeRef] = [1, 2]
var windowsPointer = UnsafeMutablePointer<UnsafePointer<Void>>(windows)
var cfArray = CFArrayCreate(nil, windowsPointer, windows.count, nil)
Converted Ian's answer to Swift 4:
let windows = [CGWindowID(17), CGWindowID(50), CGWindowID(59)]
let pointer = UnsafeMutablePointer<UnsafeRawPointer?>.allocate(capacity: windows.count)
for (index, window) in windows.enumerated() {
pointer[index] = UnsafeRawPointer(bitPattern: UInt(window))
}
let array: CFArray = CFArrayCreate(kCFAllocatorDefault, pointer, windows.count, nil)
let capture = CGImage(windowListFromArrayScreenBounds: CGRect.infinite, windowArray: array, imageOption: [])!
let image: NSImage = NSImage(cgImage: capture, size: NSSize.zero)
Swift.print(image)
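A small cleanup note, not in the snippet above: CFArrayCreate copies the pointer-sized values into the new array, so the scratch buffer can be freed once the array exists:
// safe after CFArrayCreate; the values were copied into `array`
pointer.deallocate()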
Swift arrays bridge to NSArray as long as they contain objects, i.e. the array is of type [AnyObject]. Since CGWindowID is a UInt32, you need to wrap each value in something from the NS family, and the array's map() method is an elegant way to do that.
var windows: [CGWindowID] = [CGWindowID(1), CGWindowID(2)]
var array: CFArray = windows.map({NSNumber(unsignedInt: $0)}) as CFArray
This, however, doesn't solve the actual CGWindowListCreateImageFromArray problem. Here's the working solution for that:
let windows: [CGWindowID] = [CGWindowID(17), CGWindowID(50), CGWindowID(59)]
let pointer: UnsafeMutablePointer<UnsafePointer<Void>> = UnsafeMutablePointer<UnsafePointer<Void>>.alloc(windows.count)
for var i: Int = 0, n = windows.count; i < n; i++ {
pointer[i] = UnsafePointer<Void>(bitPattern: UInt(windows[i]))
}
let array: CFArray = CFArrayCreate(kCFAllocatorDefault, pointer, windows.count, nil)
let capture: CGImage = CGWindowListCreateImageFromArray(CGRectInfinite, array, CGWindowImageOption.Default)!
let image: NSImage = NSImage(CGImage: capture, size: NSZeroSize)
Swift.print(image) // <NSImage 0x7f83a3d16920 Size={1440, 900} Reps=("<NSCGImageSnapshotRep:0x7f83a3d2dea0 cgImage=<CGImage 0x7f83a3d16840>>")>
I'm not great at Objective-C, so please correct me if I'm wrong, but from what I understand after playing with the SonOfGrab example, and the particular piece of code below, the final pointer array stores the window IDs (UInt32) not in the pointed-to memory (the memory property of an UnsafePointer) but in the pointer value itself (its hashValue / bit pattern).
const void *windowIDs[2];
windowIDs[0] = 10;
windowIDs[1] = 20;
It's interesting that the values aren't stored in memory but in the address descriptors themselves; since the oldest architectures were 32-bit, UInt32 values fit perfectly into pointers. Back in the days when memory was a limiting factor this probably made a lot of sense and was a great approach. Spending all night discovering this in Swift in 2016 nearly drove me mad.
What's worse, it fails in an Xcode 7.2 playground with certain window IDs, probably because of the way the playground handles memory, but it works in the actual app.
I'm working on getting audio into the iPhone in a form where I can pass it to a (C++) analysis algorithm. There are, of course, many options: the AudioQueue tutorial at trailsinthesand gets things started.
The audio callback, though, gives an AudioQueueRef, and I'm finding Apple's documentation thin on this side of things. There are built-in methods to write to a file, but nothing that lets you actually peer inside the packets to see the data.
I need data. I don't want to write anything to a file, which is what all the tutorials — and even Apple's convenience I/O objects — seem to be aiming at. Apple's AVAudioRecorder (infuriatingly) will give you levels and write the data, but not actually give you access to it. Unless I'm missing something...
How to do this? In the code below there is inBuffer->mAudioData which is tantalizingly close but I can find no information about what format this 'data' is in or how to access it.
AudioQueue Callback:
void AudioInputCallback(void *inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp *inStartTime,
UInt32 inNumberPacketDescriptions,
const AudioStreamPacketDescription *inPacketDescs)
{
static int count = 0;
RecordState* recordState = (RecordState*)inUserData;
AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
++count;
printf("Got buffer %d\n", count);
}
And the code to write the audio to a file:
OSStatus status = AudioFileWritePackets(recordState->audioFile,
false,
inBuffer->mAudioDataByteSize,
inPacketDescs,
recordState->currentPacket,
&inNumberPacketDescriptions,
inBuffer->mAudioData); // THIS! This is what I want to look inside of.
if(status == 0)
{
recordState->currentPacket += inNumberPacketDescriptions;
}
// so you don't have to hunt them all down when you decide to switch to float:
#define AUDIO_DATA_TYPE_FORMAT SInt16
// the actual sample-grabbing code:
int sampleCount = inBuffer->mAudioDataBytesCapacity / sizeof(AUDIO_DATA_TYPE_FORMAT);
AUDIO_DATA_TYPE_FORMAT *samples = (AUDIO_DATA_TYPE_FORMAT*)inBuffer->mAudioData;
Then you have your (in this case SInt16) array samples which you can access from samples[0] to samples[sampleCount-1].
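If you end up doing the same thing from a Swift callback, the equivalent is roughly this (a sketch: it assumes 16-bit signed, native-endian linear PCM, that inBuffer is the callback's AudioQueueBufferRef parameter, and it uses mAudioDataByteSize, the count of valid bytes, rather than the buffer capacity):
let byteCount = Int(inBuffer.pointee.mAudioDataByteSize)   // valid bytes, not capacity
let sampleCount = byteCount / MemoryLayout<Int16>.size
let samples = inBuffer.pointee.mAudioData.assumingMemoryBound(to: Int16.self)
// samples[0] ... samples[sampleCount - 1] are the SInt16 values;
// e.g. Float(samples[0]) / Float(Int16.max) is roughly in the range -1.0 ... 1.0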
The above solution did not work for me; I was getting the wrong sample data (an endianness issue). In case someone else gets wrong sample data in the future, I hope this helps:
-(void)feedSamplesToEngine:(UInt32)audioDataBytesCapacity audioData:(void *)audioData {
int sampleCount = audioDataBytesCapacity / sizeof(SAMPLE_TYPE);
SAMPLE_TYPE *samples = (SAMPLE_TYPE*)audioData;
//SAMPLE_TYPE *sample_le = (SAMPLE_TYPE *)malloc(sizeof(SAMPLE_TYPE)*sampleCount );//for swapping endians
std::string shorts;
double power = pow(2,10);
for(int i = 0; i < sampleCount; i++)
{
SAMPLE_TYPE sample_le = (0xff00 & (samples[i] << 8)) | (0x00ff & (samples[i] >> 8)); // endianness swap
char dataInterim[30];
sprintf(dataInterim,"%f ", sample_le/power); // normalize it.
shorts.append(dataInterim);
}