How to represent ObjC enum AVAudioSessionPortOverride which has declaration of int and string using Dart ffi? - objective-c

I'm working on a cross platform sound API for Flutter.
We're trying to stop using Objective-C/Swift for the iOS portion of the API, and we're using Dart FFI as a replacement.
FFI (foreign function interface) allows Dart to call into an Objective-C API.
This means we need to create a Dart library which wraps the Objective-C audio library.
Whilst doing this we encountered the AVAudioSessionPortOverride enum, which has two declarations: AVAudioSessionPortOverrideSpeaker = 'spkr' and AVAudioSessionPortOverrideNone = 0.
I'm confused as to what's going on here, as one of these declarations is an int whilst the other is a string.
I note that AVAudioSessionPortOverride extends NSUInteger, so how is the string being handled? Is it somehow being converted to an int? If so, any ideas on how I would do this in Dart?
Here's what we have so far:
class AVAudioSessionPortOverride extends NSUInteger {
  const AVAudioSessionPortOverride(int value) : super(value);

  static AVAudioSessionPortOverride None = AVAudioSessionPortOverride(0);
  static const AVAudioSessionPortOverride Speaker =
      AVAudioSessionPortOverride('spkr');
}

'spkr' is in fact an int. See e.g. How to convert multi-character constant to integer in C? for an explanation of how this obscure feature in C works.
That said, if you look at the Swift representation of the PortOverride enum, you'll see this:
/// For use with overrideOutputAudioPort:error:
public enum PortOverride : UInt {

    /// No override. Return audio routing to the default state for the current audio category.
    case none = 0

    /// Route audio output to speaker. Use this override with AVAudioSessionCategoryPlayAndRecord,
    /// which by default routes the output to the receiver.
    case speaker = 1936747378
}
Also, see https://developer.apple.com/documentation/avfoundation/avaudiosession/portoverride/speaker
Accordingly, 0 and 1936747378 are the values you should use.
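If it helps to see where 1936747378 comes from, here is a small sketch (my own addition, in Swift) that packs the four ASCII bytes of "spkr" into a UInt32 the same way the C multi-character constant does:

let spkr = "spkr".utf8.reduce(UInt32(0)) { ($0 << 8) | UInt32($1) }
print(spkr)                       // 1936747378
print(String(spkr, radix: 16))    // 73706b72, i.e. the bytes of 's' 'p' 'k' 'r'

On the Dart side you then only need the integer constants 0 and 1936747378 (0x73706b72); the 'spkr' spelling never has to cross the FFI boundary.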

Look at this
NSLog(@"spkr = %x s = %x p = %x k = %x r = %x", 'spkr', 's', 'p', 'k', 'r');
Apple is doing everything your lecturer warned you against. You can get away with this since the string is 4 chars (bytes) long. If you make it longer you'll get a warning. The string gets converted to an int as illustrated in the code snippet above. You could reverse it by accessing the four bytes one by one and printing them as a character.
Spoiler - it will print
spkr = 73706b72 s = 73 p = 70 k = 6b r = 72
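A sketch of that reversal (my own addition, in Swift): take the four bytes of 0x73706b72 one at a time, from the most significant down, and print them as characters:

let value: UInt32 = 0x73706b72
let text = String((0..<4).reversed().map { shift in
    Character(UnicodeScalar(UInt8((value >> (shift * 8)) & 0xff)))
})
print(text)    // spkr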

Related

Converting String using specific encoding to get just one character

I'm on this frustrating journey trying to get a specific character from a Swift string. I have an Objective-C function, something like
- (NSString *)doIt:(char)c
that I want to call from Swift.
This c is eventually passed to a C function in the back that does the heavy lifting here, but this function gets tripped up when c is the character A0 (NO-BREAK SPACE).
Now I have two questions (apologies SO).
I am trying to use different encodings, especially the ASCII variants, hoping one would convert it (A0) to a space (hex 20, decimal 32). The verdict seems to be that I need to hardcode this, but if there is a failsafe, non-hardcoded way, I'd like to hear about it!
I am really struggling with the conversion itself. How do I access a specific character using a specific encoding in Swift?
a) I can use
s.utf8CString[i]
but then I am bound to UTF8.
b) I can use something like
let s = "\u{a0}"
let p = UnsafeMutablePointer<CChar>.allocate(capacity: n)
defer {
    p.deallocate()
}
// Convert to ASCII
NSString(string: s).getCString(p,
                               maxLength: n,
                               encoding: CFStringConvertEncodingToNSStringEncoding(CFStringBuiltInEncodings.ASCII.rawValue))
// Hope for 32
let c = p[i]
but this seems overkill. The string is converted to NSString to apply the encoding and I need to allocate a pointer, all just to get a single character.
c) Here it seems Swift String's withCString is the man for the job, but I cannot even get it to compile. Below is what Xcode's completion gives, but even after fiddling with it for a long time I am still stuck.
// How do I use this
// ??
s.withCString(encodedAs: _UnicodeEncoding.Protocol) { (UnsafePointer<FixedWidthInteger & UnsignedInteger>) -> Result in
    // ??
}
TIA
There are two withCString() methods: withCString(_:) calls the given closure with a pointer to the contents of the string, represented as a null-terminated sequence of UTF-8 code units. Example:
// An emulation of your Objective-C method.
func doit(_ c: CChar) {
    print(c, terminator: " ")
}

let s = "a\u{A0}b"
s.withCString { ptr in
    var p = ptr
    while p.pointee != 0 {
        doit(p.pointee)
        p += 1
    }
}
print()
// Output: 97 -62 -96 98
Here -62 -96 is the signed character representation of the UTF-8 sequence C2 A0 of the NO-BREAK SPACE character U+00A0.
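As a quick cross-check (my own addition), you can map those signed values back to the unsigned UTF-8 code units:

print(UInt8(bitPattern: -62), UInt8(bitPattern: -96))    // 194 160, i.e. 0xC2 0xA0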
If you just want to iterate over all UTF-8 characters of the string sequentially then you can simply use the .utf8 view. The (unsigned) UInt8 bytes must be converted to the corresponding (signed) CChar:
let s = "a\u{A0}b"
for c in s.utf8 {
    doit(CChar(bitPattern: c))
}
print()
I am not aware of a method which transforms U+00A0 to a “normal” space character, so you have to do that manually. With
let s = "a\u{A0}b".replacingOccurrences(of: "\u{A0}", with: " ")
the output of the above program would be 97 32 98.
The withCString(encodedAs:_:) method calls the given closure with a pointer to the contents of the string, represented as a null-terminated sequence of code units. Example:
let s = "a\u{A0}b€"
s.withCString(encodedAs: UTF16.self) { ptr in
    var p = ptr
    while p.pointee != 0 {
        print(p.pointee, terminator: " ")
        p += 1
    }
}
print()
// Output: 97 160 98 8364
This method is probably of limited use for your purpose because it can only be used with UTF8, UTF16 and UTF32.
For other encodings you can use the data(using:) method. It produces a Data value which is a sequence of UInt8 (an unsigned type). As above, these must be converted to the corresponding signed character:
let s = "a\u{A0}b"
if let data = s.data(using: .isoLatin1) {
    data.forEach {
        doit(CChar(bitPattern: $0))
    }
}
print()
// Output: 97 -96 98
Of course this may fail if the string is not representable in the given encoding.
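For completeness, here is a small sketch (my own addition) of that failure case, plus the allowLossyConversion: fallback, reusing the doit() helper from above:

let s = "a\u{A0}b€"                                // "€" has no ISO Latin-1 representation
print(s.data(using: .isoLatin1) == nil)            // true – the conversion fails
if let lossy = s.data(using: .isoLatin1, allowLossyConversion: true) {
    lossy.forEach { doit(CChar(bitPattern: $0)) }  // 97 -96 98, plus a lossy stand-in for "€"
}
print()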

QR Code encode mode for short URLs

Usual URL-shortening techniques use few characters of the usual URL charset, because they don't need more. A typical short URL is http://domain/code, where code is an integer number. Suppose that I can use any base (base10, base16, base36, base62, etc.) to represent the number.
QR Codes have many encoding modes, and we can optimize the QR Code (minimal version to obtain the lowest density), so we can test pairs of baseX-modeY...
What is the best base-mode pair?
NOTES
A guess...
Two modes fit with the "URL shortening profile",
0010 - Alphanumeric encoding (11 bits per 2 characters)
0100 - Byte encoding (8 bits per character)
My choice was "upper case base36" and Alphanumeric (which also encodes "/", ":", etc.), but I haven't seen any demonstration that it is always (for any URL length) the best. Is there a good guide or mathematical demonstration of this kind of optimization?
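For a rough feel of the difference, here is a small bit-count sketch (my own addition, written in Swift only for illustration; data bits only, ignoring mode and length indicators) for a 17-character URL such as HTTP://BIT.LY/XYZ:

let chars = 17
let alphanumericBits = (chars / 2) * 11 + (chars % 2) * 6   // 11 bits per pair, 6 bits for a final odd character
let byteBits = chars * 8                                    // 8 bits per character
print(alphanumericBits, byteBits)                           // 94 136

So as long as the whole URL stays inside the alphanumeric charset (upper case letters, digits and a few symbols such as ":" and "/"), alphanumeric mode needs noticeably fewer data bits than byte mode.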
The ideal (perhaps impracticable)
There is another variation: "encoding modes can be mixed as needed within a QR symbol" (Wikipedia)... So we can also use
HTTP://DOMAIN/ with Alphanumeric + change_mode + Numeric encoding (10 bits per 3 digits)
For long URLs (long integers), of course, this is the best solution (!), because it uses the whole charset and wastes nothing... Is it?
The problem is that this kind of optimization (mixed mode) is not accessible in the usual QR-Code image generators... Is it practicable? Is there a generator that uses it correctly?
An alternative answer format
The (practicable) question is about the best combination of base and mode, so we can express it as a (e.g. JavaScript) function,
function bestBaseMode(domain, number_range) {
    var dom_len = domain.length;
    var urlBase_len = dom_len + 8; // 8 = "http://".length + "/".length;
    var num_min = number_range[0];
    var num_max = number_range[1];
    // ... check optimal base and mode
    return [base, mode];
}
Example-1: the domain is "bit.ly" and the code is an ISO 3166-1 numeric country code,
ranging from 4 to 894. So urlBase_len=14, num_min=4 and num_max=894.
Example-2: the domain is "postcode-resolver.org" and the number_range parameter is the range of the most frequent postal codes' integer representations, for instance a statistically inferred range from ~999 to ~999999. So urlBase_len=27, num_min=999 and num_max=9999999.
Example-3: the domain is "my-example3.net" and number_range is a double SHA-1 code, i.e. a fixed-length code of 40 bytes (two concatenated hexadecimal numbers, each 40 digits long). So num_max=num_min=Math.pow(8,40).
Nobody wanted my bounty... I lost it, and now I also need to do the work myself ;-)
about the ideal
The goQR.me support replied to the particular question about mixed encoding, noting that, unfortunately, it can't be used:
sorry, our api does not support mixed qr code encoding.
Even the standard may defined it. Real world QR code scanner apps
on mobile phone have tons of bugs, we would not recommend to rely
on this feature.
functional answer
This function shows the answers in the console... It is a simplification and a "brute force" solution.
/**
 * Find the best base-mode pair for a short URL template as QR-Code.
 * @param msg for debug or report.
 * @param domain the string of the internet domain
 * @param digits10 the max. number of digits in a decimal representation
 * @return array of objects with equivalent valid answers.
 */
function bestBaseMode(msg, domain, digits10) {
    var commomBases = [2, 8, 10, 16, 36, 60, 62, 64, 124, 248]; // your config
    var dom_len = domain.length;
    var urlBase_len = dom_len + 8; // 8 = "http://".length + "/".length
    var numb = parseFloat("9".repeat(digits10));
    var scores = [];
    var best = 99999;
    for (var i in commomBases) {
        var b = commomBases[i];
        // formula at http://math.stackexchange.com/a/335063
        var digits = Math.floor(Math.log(numb) / Math.log(b)) + 1;
        var mode = 'alpha';
        var len = dom_len + digits;
        var lost = 0;
        if (b > 36) {
            mode = 'byte';
            lost = parseInt(urlBase_len * 0.25); // only 6 of 8 bits used at URL
        }
        var score = len + lost; // penalty
        scores.push({BASE: b, MODE: mode, digits: digits, score: score});
        if (score < best) best = score;
    }
    var r = [];
    for (var i in scores) {
        if (scores[i].score == best) r.push(scores[i]);
    }
    return r;
}
Running the question examples:
var x = bestBaseMode("Example-1", "bit.ly",3);
console.log(JSON.stringify(x)) // "BASE":36,"MODE":"alpha","digits":2,"score":8
var x = bestBaseMode("Example-2", "postcode-resolver.org",7);
console.log(JSON.stringify(x)) // "BASE":36,"MODE":"alpha","digits":5,"score":26
var x = bestBaseMode("Example-3", "my-example3.net",97);
console.log(JSON.stringify(x)) // "BASE":248,"MODE":"byte","digits":41,"score":61

Expand parameter list in C

I am using a C library in my Objective C project. The C library offers the following function
void processData(...);
which can be used with 1, 2 or 3 parameters, where the first parameter is mandatory and can have different types (int, double, long, float), and the other two arguments are optional, have int and long values, and can come in any order.
Examples of use of this function are:
int myInt = 2;
double myDouble = 1.23;
int dataQuality = 1;
long dataTimestamp= GET_NOW();
processData(myInt);
processData(myInt, dataQuality);
processData(myDouble, dataQuality, dataTimestamp);
processData(myDouble, dataTimestamp);
I need to make an Objective-C wrapper that uses a Data class to call processData with the correct parameters. The Data class has getters that expose the data type (first argument), its value, whether the second and third arguments have values, and what those values are.
The problem is how to do this expansion. I think it must be done at compile time, and I think the only mechanism available in C to do so is macros, but I have never used them. The implementation should be something like this (the following is pseudocode, where the argument list is evaluated at runtime, something that I guess should be replaced by macros in order to evaluate the arguments at compile time):
- (void)objetiveCProcessData:(Data)d {
    argumentList = {}
    switch (d.getDataType()) {
        case INT_TYPE:
            append(argumentList, d.getValueAsInt());    // <-- appends a value with type `int`
            break;
        case DOUBLE_TYPE:
            append(argumentList, d.getValueAsDouble()); // <-- appends a value with type `double`
            break;
        ...
    }
    if (d.hasQuality()) {
        append(argumentList, d.getQuality());
    }
    if (d.hasTimeStamp()) {
        append(argumentList, d.getTimestamp());
    }
    // Call the C function with the correct number and type of arguments
    processData(argumentList);
}

Using C style unsigned char array and bitwise operators in Swift

I'm working on changing some Objective-C Code over to Swift, and I cannot figure out for the life of me how to take care of unsigned char arrays and bitwise operations in this specific instance of code.
Specifically, I'm working on converting the following Objective-C code (which deals with CoreBluetooth) to Swift:
unsigned char advertisementBytes[21] = {0};
[self.proximityUUID getUUIDBytes:(unsigned char *)&advertisementBytes];
advertisementBytes[16] = (unsigned char)(self.major >> 8);
advertisementBytes[17] = (unsigned char)(self.major & 255);
I've tried the following in Swift:
var advertisementBytes: CMutablePointer<CUnsignedChar>
self.proximityUUID.getUUIDBytes(advertisementBytes)
advertisementBytes[16] = (CUnsignedChar)(self.major >> 8)
The problems I'm running into are that getUUIDBytes in Swift seems to only take a CMutablePointer<CUnsignedChar> object as an argument, rather than an array of CUnsignedChars, so I have no idea how to do the later bitwise operations on advertisementBytes, as it seems it would need to be an unsignedChar array to do so.
Additionally, CMutablePointer<CUnsignedChar[21]> throws an error saying that fixed length arrays are not supported in CMutablePointers in Swift.
Could anyone please advise on potential work-arounds or solutions? Many thanks.
Have a look at Interacting with C APIs
Mostly this
C Mutable Pointers

When a function is declared as taking a CMutablePointer<Type> argument, it can accept any of the following:

nil, which is passed as a null pointer
A CMutablePointer<Type> value
An in-out expression whose operand is a stored lvalue of type Type, which is passed as the address of the lvalue
An in-out Type[] value, which is passed as a pointer to the start of the array, and lifetime-extended for the duration of the call

If you have declared a function like this one:

func takesAMutablePointer(x: CMutablePointer<Float>) { /*...*/ }

You can call it in any of the following ways:

var x: Float = 0.0
var p: CMutablePointer<Float> = nil
var a: Float[] = [1.0, 2.0, 3.0]
takesAMutablePointer(nil)
takesAMutablePointer(p)
takesAMutablePointer(&x)
takesAMutablePointer(&a)
So your code becomes
var advertisementBytes = CUnsignedChar[]()
self.proximityUUID.getUUIDBytes(&advertisementBytes)
advertisementBytes[16] = CUnsignedChar(self.major >> 8)
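Note that the snippet above uses 2014-era Swift syntax (CUnsignedChar[], CMutablePointer) that no longer compiles. A rough modern-Swift equivalent might look like this (my own sketch; it assumes proximityUUID is an NSUUID, that major fits in 16 bits, and that getUUIDBytes: is imported as getBytes(_:) on your SDK – check the generated interface):

import Foundation

let proximityUUID = NSUUID()
let major: UInt16 = 0x1234
var advertisementBytes = [UInt8](repeating: 0, count: 21)
proximityUUID.getBytes(&advertisementBytes)   // fills the first 16 bytes with the UUID
advertisementBytes[16] = UInt8(major >> 8)    // high byte of major
advertisementBytes[17] = UInt8(major & 255)   // low byte of major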

c, obj c enum without tag or identifier

I'm learning cocos2d (an OpenGL wrapper for Objective-C on iPhone), and now, playing with sprites, I have found this in an example:
enum {
    easySprite = 0x0000000a,
    mediumSprite = 0x0000000b,
    hardSprite = 0x0000000c,
    backButton = 0x0000000d,
    magneticSprite = 0x0000000e,
    magneticSprite2 = 0x0000000f
};
...
-(id) init
{...
    //second sprite
    TSprite *med = [TSprite spriteWithFile:@"butonB.png"]; //blue
    [med SetCanTrack:YES];
    [self addChild: med z:1 tag:mediumSprite];
    med.position = ccp(299,230);
    [TSprite track:med];
So the value defined in the enum is used as the tag of the created sprite object, but I don't understand:
why give the tags values in hex?
why use the enum without a tag (name)?
As I knew it, an enum in Objective-C and C looks like this:
typedef enum {
    JPG,
    PNG,
    GIF,
    PVR
} kImageType;
thanks!
Usually, when you are creating an enum, you want to use it as a type (for variables, method parameters, etc.).
In this case, it's just a way to declare integer constants. Since they don't want to use the enum as a type, the name is not necessary.
Edit:
Hexadecimal numbers are commonly used when the integer is a binary mask. You won't see operators like +, -, *, / used with such a number; you'll see bitwise operators (~, &, |, ^).
Every digit in a hexadecimal number represents 4 bits. The whole number is a 32-bit integer and by writing it in hexadecimal in this case, you are saying that you are using only the last four bits and the other bits can be used for something else. This wouldn't be obvious from a decimal number.
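A tiny illustration (my own addition, written in Swift just for brevity) of that point – each hex digit is exactly four bits, so 0x0000000f can be read at a glance as "the low four bits":

let lowNibbleMask: UInt32 = 0x0000000f
let tag: UInt32 = 0x0000000e          // magneticSprite from the example above
print(tag & lowNibbleMask)            // 14 – the value lives entirely in the low four bits
print(String(tag, radix: 2))          // 1110 – the other 28 bits are untouched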
Enums are automatically assigned values, incremented from 0 but you can assign your own values.
If you don't specify any values they will be starting from 0 as in:
typedef enum {
    JPG,
    PNG,
    GIF,
    PVR
} kImageType;
But you could assign them values:
typedef enum {
    JPG = 0,
    PNG = 1,
    GIF = 2,
    PVR = 3
} kImageType;
or even
typedef enum {
    JPG = 100,
    PNG = 0x01,
    GIF = 100,
    PVR = 0xff
} kImageType;
anything you want, repeating values are ok as well.
I'm not sure why they are given those specific values but they might have some meaning related to use.
Well, you seem to be working off a terrible example. :)
At least as far as enums are concerned. It's up to anyone to define the actual value of an enum entry, but there's no gain in using hex numbers, and in particular there's no point in making the hex values run from a through f (10 to 15). The example will also work with this enum:
enum {
    easySprite = 10,
    mediumSprite,
    hardSprite,
    backButton,
    magneticSprite,
    magneticSprite2
};
And unless there's some point in having the enumeration start with value 10, it will probably work without specifying any concrete values.