Translating NSEvent from Objective-C to Lua

I'm currently trying to read the contents of an Apple-made plist file using Lua, so that I can use its values for something.
The plist contains keyboard shortcuts, using a 'modifierMask'.
By testing one by one, I've determined that the modifierMask values below match the listed modifier keys, but I'm unsure how exactly Apple is calculating the mask value:
-- modifierMask = 131072 (shift)
-- modifierMask = 262144 (control)
-- modifierMask = 524288 (option)
-- modifierMask = 1048576 (command)
-- modifierMask = 786432 (control + option)
-- modifierMask = 393216 (control + shift)
-- modifierMask = 1310720 (control + command)
-- modifierMask = 1572864 (option + command)
-- modifierMask = 655360 (shift + option)
-- modifierMask = 1179648 (command + shift)
-- modifierMask = 917504 (control + shift + option)
-- modifierMask = 1703936 (option + command + shift)
-- modifierMask = 1835008 (control + option + command)
Someone else has suggested that the modifier masks most likely match up to the NSEvent modifier flags, and supplied the following Objective-C example:
Modifier Flags
The following constants (except for NSDeviceIndependentModifierFlagsMask) represent device-independent bits found in event modifier flags:
Declaration
OBJECTIVE-C
enum {
    NSAlphaShiftKeyMask = 1 << 16,
    NSShiftKeyMask = 1 << 17,
    NSControlKeyMask = 1 << 18,
    NSAlternateKeyMask = 1 << 19,
    NSCommandKeyMask = 1 << 20,
    NSNumericPadKeyMask = 1 << 21,
    NSHelpKeyMask = 1 << 22,
    NSFunctionKeyMask = 1 << 23,
    NSDeviceIndependentModifierFlagsMask = 0xffff0000U
};
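Checking the suggestion against the masks above: 1 << 17 = 131072 matches the shift-only value, and 262144 + 524288 = 786432 matches 'control + option', so each combined mask is just the sum (i.e. the bitwise OR) of the individual flag bits.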
This looks promising; however, I know nothing about Objective-C, so I was wondering if anyone could please help me translate these Objective-C declarations into something I can use within Lua. Basically, I want to create a Lua function that takes a modifierMask (e.g. 131072) and returns a result that tells me what that modifierMask means (e.g. 'shift'). Any ideas?
Thanks in advance!

Answered here:
maskToTable = function(value)
    -- the NSEvent device-independent modifier flag bits
    local modifiers = {
        AlphaShift = 1 << 16,
        Shift      = 1 << 17,
        Control    = 1 << 18,
        Alternate  = 1 << 19,
        Command    = 1 << 20,
        NumericPad = 1 << 21,
        Help       = 1 << 22,
        Function   = 1 << 23,
    }
    local answer = {}
    -- collect the name of every flag whose bit is set in the mask
    for k, v in pairs(modifiers) do
        if (value & v) == v then
            table.insert(answer, k)
        end
    end
    return answer
end
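For example, in a Lua 5.3+ interpreter (a quick sketch, assuming native bitwise operators are available; note that pairs() returns the modifier names in no particular order):
print(table.concat(maskToTable(131072), " + "))  --> Shift
print(table.concat(maskToTable(786432), " + "))  --> Control + Alternate (i.e. control + option)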

Related

What does mpn_invert_3by2 in mini-gmp do?

I really wonder about the answer to this question, and I used Python to calculate:
def inv(a):
    return ((1 << 96) - 1) // (a << 32)
Why is Python's result different from mpn_invert_limb's?
/* The 3/2 inverse is defined as
m = floor( (B^3-1) / (B u1 + u0)) - B
*/
Here B should be 2^32.
And what is the use of mpn_invert_limb?
Python code:
def inv(a):
    return ((1 << 96) - 1) // (a << 32)

a = 165536
b = inv(a)
print(b & (2 ** 32 - 1))
C code:
int main()
{
    mp_limb_t a = 16636;
    mp_limb_t b;
    b = mpn_invert_limb(a);
    printf("a = %u, b = %u\n", a, b);
    printf("a = %X, b = %X\n", a, b);
    return 0;
}
Python output:
3522819686
C output:
a = 165536, b = 3165475657
a = 286A0, b = BCAD5349
Calling mpn_invert_limb only makes sense when your input is full-sized (has its high bit set). If the input isn't full-sized, the quotient would be too big to fit in a single limb, whereas in the full-sized case it's only 1 bit too big, hence the subtraction of B in the definition.
I actually can't even run with your input of 16636; I get a division by 0 because this isn't even half a limb. Anyway, if I replace that value by a << 17 then I get a match between your Python and C. This shifting to make the top bit be set is what mini-gmp does in its usage of the function.
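For reference, 16636 is roughly 2^14, so 16636 << 17 = 2180513792, which is at least 2^31 = 2147483648, i.e. the high bit of a 32-bit limb is set; without the shift the high bits are zero and the 3/2-inverse quotient no longer fits in a single limb.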

decoding base64 encoded text with POSIX awk

In a bash script that I'm writing for Linux/Solaris I need to decode more than a hundred thousand base64-encoded text strings, and, because I don't want to massively fork a non-portable base64 binary from awk, I wrote a function that does the decoding.
Here's the code of my base64_decode function:
function base64_decode(str,    out,i,n,v) {
    out = ""
    if ( ! ("A" in _BASE64_DECODE_c2i) )
        for (i = 1; i <= 64; i++)
            _BASE64_DECODE_c2i[substr("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",i,1)] = i-1
    i = 0
    n = length(str)
    while (i <= n) {
        v = _BASE64_DECODE_c2i[substr(str,++i,1)] * 262144 + \
            _BASE64_DECODE_c2i[substr(str,++i,1)] * 4096 + \
            _BASE64_DECODE_c2i[substr(str,++i,1)] * 64 + \
            _BASE64_DECODE_c2i[substr(str,++i,1)]
        out = out sprintf("%c%c%c", int(v/65536), int(v/256), v)
    }
    return out
}
Which works fine:
printf '%s\n' SmFuZQ== amRvZQ== |
LANG=C command -p awk '
{ print base64_decode($0) }
function base64_decode(...) {...}
'
Jane
jdoe
SIMPLIFIED REAL-LIFE EXAMPLE THAT DOESN'T WORK AS EXPECTED
I want to get the givenName of the users that are members of GroupCode = 025496 from the output of ldapsearch -LLL -o ldif-wrap=no ... '(|(uid=*)(GroupCode=*))' uid givenName sn GroupCode memberUid:
dn: uid=jsmith,ou=users,dc=example,dc=com
givenName: John
sn: SMITH
uid: jsmith

dn: uid=jdoe,ou=users,dc=example,dc=com
uid: jdoe
givenName:: SmFuZQ==
sn:: RE9F

dn: cn=group1,ou=groups,dc=example,dc=com
GroupCode: 025496
memberUid:: amRvZQ==
memberUid: jsmith
Here would be an awk for doing so:
LANG=C command -p awk -F '\n' -v RS='' -v GroupCode=025496 '
{
    delete attrs
    for (i = 2; i <= NF; i++) {
        match($i,/::? /)
        key = substr($i,1,RSTART-1)
        val = substr($i,RSTART+RLENGTH)
        if (RLENGTH == 3)
            val = base64_decode(val)
        attrs[key] = ((key in attrs) ? attrs[key] SUBSEP val : val)
    }
    if ( /\nuid:/ )
        givenName[ attrs["uid"] ] = attrs["givenName"]
    else
        memberUid[ attrs["GroupCode"] ] = attrs["memberUid"]
}
END {
    n = split(memberUid[GroupCode],uid,SUBSEP)
    for ( i = 1; i <= n; i++ )
        print givenName[ uid[i] ]
}
function base64_decode(...) { ... }
'
On BSD and Solaris the result is:
Jane
John
While on Linux it is:
John
I don't know where the issue might be; is there something wrong with the base64_decode function and/or the code that uses it?
Your function generates NUL bytes when its argument (the encoded string) ends with padding characters (=): the character = isn't in the _BASE64_DECODE_c2i table, so it contributes 0 and the trailing %c conversions emit NUL bytes. Below is a corrected version of your while loop:
while (i < n) {
    v = _BASE64_DECODE_c2i[substr(str,1+i,1)] * 262144 + \
        _BASE64_DECODE_c2i[substr(str,2+i,1)] * 4096 + \
        _BASE64_DECODE_c2i[substr(str,3+i,1)] * 64 + \
        _BASE64_DECODE_c2i[substr(str,4+i,1)]
    i += 4
    if (v%256 != 0)
        out = out sprintf("%c%c%c", int(v/65536), int(v/256), v)
    else if (int(v/256)%256 != 0)
        out = out sprintf("%c%c", int(v/65536), int(v/256))
    else
        out = out sprintf("%c", int(v/65536))
}
Note that if the decoded bytes contain an embedded NUL then this approach may not work properly.
The problem is within the base64_decode function, which outputs some junk characters on gnu-awk.
As an alternative, you can use this awk code, which uses the system-provided base64 utility:
{
    delete attrs
    for (i = 2; i <= NF; i++) {
        match($i,/::? /)
        key = substr($i,1,RSTART-1)
        val = substr($i,RSTART+RLENGTH)
        if (RLENGTH == 3) {
            cmd = "echo " val " | base64 -di"
            cmd | getline val   # should also check the exit code here
            close(cmd)          # close the pipe so many decodes don't exhaust file descriptors
        }
        attrs[key] = ((key in attrs) ? attrs[key] SUBSEP val : val)
    }
    if ( /\nuid:/ )
        givenName[ attrs["uid"] ] = attrs["givenName"]
    else
        memberUid[ attrs["GroupCode"] ] = attrs["memberUid"]
}
END {
    n = split(memberUid[GroupCode],uid,SUBSEP)
    for ( i = 1; i <= n; i++ )
        print givenName[ uid[i] ]
}
I have tested this on GNU and BSD awk versions and I am getting the expected output in all cases.
If you cannot use the external base64 utility then I suggest you take a look here for an awk version of base64 decode.
This answer is for reference.
Here's a working base64_decode function (thanks @MNejatAydin for pointing out the issue(s) in the original one):
function base64_decode(str,    out,bits,n,i,c1,c2,c3,c4) {
    out = ""
    # One-time initialization during the first execution
    if ( ! ("A" in _BASE64) )
        for (i = 1; i <= 64; i++)
            # The "_BASE64" array associates a character to its base64 index
            _BASE64[substr("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",i,1)] = i-1
    # Decoding the input string
    n = length(str)
    i = 0
    while ( i < n ) {
        c1 = substr(str, ++i, 1)
        c2 = substr(str, ++i, 1)
        c3 = substr(str, ++i, 1)
        c4 = substr(str, ++i, 1)
        bits = _BASE64[c1] * 262144 + _BASE64[c2] * 4096 + _BASE64[c3] * 64 + _BASE64[c4]
        if ( c4 != "=" )
            out = out sprintf("%c%c%c", bits/65536, bits/256, bits)
        else if ( c3 != "=" )
            out = out sprintf("%c%c", bits/65536, bits/256)
        else
            out = out sprintf("%c", bits/65536)
    }
    return out
}
WARNING: the function requires LANG=C
It also doesn't check that the input is a valid base64 string; for that you can add a simple condition like:
match( str, "^([a-zA-Z/-9+]{4})*([a-zA-Z/-9+]{2}[a-zA-Z/-9+=]{2})?$" )
(In the bracket expressions, the range /-9 covers the characters /, 0, 1, ..., 9, so each class matches the base64 alphabet.)
Interestingly, the code is 2x faster than base64decode.awk, but it's only 3x faster than forking the base64 binary from inside awk.
Notes:
In a base64-encoded string, 4 characters represent 3 bytes of data; the input has to be processed in groups of 4 characters.
Multiplying and dividing an integer by a power of two is equivalent to doing bitwise left and right shift operations.
262144 is 2^18, so N * 262144 is equivalent to N << 18
4096 is 2^12, so N * 4096 is equivalent to N << 12
64 is 2^6, so N * 64 is equivalent to N << 6
65536 is 2^16, so N / 65536 (integer division) is equivalent to N >> 16
256 is 2^8, so N / 256 (integer division) is equivalent to N >> 8
What happens in printf "%c", N:
N is first converted to an integer (if need be) and then, WITH LANG=C, the 8 least significant bits are used for the %c formatting.
How the possible padding of one or two trailing = characters at the end of the encoded string is handled:
If the 4th char isn't = (i.e. there's no padding) then the result should be 3 bytes of data.
If the 4th char is = and the 3rd char isn't = then there are 2 bytes of data to decode.
If the 4th char is = and the 3rd char is = then there's only one byte of data (see the worked example below).
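A quick worked example with the padded string from the sample data: SmFuZQ== decodes to Jane. Its last group is ZQ==, so c1 = "Z" (25), c2 = "Q" (16), and c3 = c4 = "=". Then bits = 25 * 262144 + 16 * 4096 = 6619136, and since c3 is "=" only one byte is emitted: 6619136 / 65536 = 101, which is the character "e".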

What is operation in enum type?

What is:
NSStreamEventOpenCompleted = 1 << 0 , 1 << 1 , 1 << 2 , 1 << 3 , 1 << 4 ?
In the example below
typedef enum {
    NSStreamEventNone = 0,
    NSStreamEventOpenCompleted = 1 << 0,
    NSStreamEventHasBytesAvailable = 1 << 1,
    NSStreamEventHasSpaceAvailable = 1 << 2,
    NSStreamEventErrorOccurred = 1 << 3,
    NSStreamEventEndEncountered = 1 << 4
};
That's a bitwise shift operation. It is used so that you can set one or more flags from the enum. This answer has a good explanation: Why use the Bitwise-Shift operator for values in a C enum definition?
Basically, it's so that one integer can store multiple flags which can be checked with the binary AND operator. The enum values end up looking like this:
typedef enum {
    NSStreamEventNone = 0,                   // 00000
    NSStreamEventOpenCompleted = 1 << 0,     // 00001
    NSStreamEventHasBytesAvailable = 1 << 1, // 00010
    NSStreamEventHasSpaceAvailable = 1 << 2, // 00100
    NSStreamEventErrorOccurred = 1 << 3,     // 01000
    NSStreamEventEndEncountered = 1 << 4     // 10000
};
So you can say:
// Set two flags with the binary OR operator
int flags = NSStreamEventEndEncountered | NSStreamEventOpenCompleted; // 10001
if (flags & NSStreamEventEndEncountered)    // true
if (flags & NSStreamEventHasBytesAvailable) // false
If you didn't have the binary shift, the values could clash or overlap and the technique wouldn't work. You may also see enums get set to 0, 1, 2, 4, 8, 16, which is the same thing as the shift above.

Joining hex values to create a 16bit value

I'm receiving three uint8 values which are the Most, Middle and Least Significant Digits of a plot value:
E.g. printed in console (%c):
1 A 4
I need to pass them into a signal view UI grapher which accepts a uint16_t. So far the way I'm doing it is not working correctly.
uint16_t iChanI = (bgp->iChanIH << 8) + (bgp->iChanIM <<4 ) + bgp->iChanIL;
uint16_t iChanQ = (bgp->iChanQH << 8) + (bgp->iChanQM <<4) + bgp->iChanQL;
[self updateSView:iChanI ichanQ:iChanQ];
Am I merging them correctly, or just adding the values?
Any help is much appreciated,
Thanks,
You first need to convert each hex character to its equivalent 4-bit (nybble) representation, and then merge them into a uint16_t, e.g.
uint8_t to_nybble(char c)
{
    return c >= '0' && c <= '9' ? c - '0' : c - 'A' + 10;
}

uint16_t iChanI = (to_nybble(bgp->iChanIH) << 8) |
                  (to_nybble(bgp->iChanIM) << 4) |
                  to_nybble(bgp->iChanIL);
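With the example digits above ('1', 'A', '4'): to_nybble('1') = 1, to_nybble('A') = 10 and to_nybble('4') = 4, so iChanI = (1 << 8) | (10 << 4) | 4 = 256 + 160 + 4 = 420 = 0x1A4.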

Separate signed int into bytes in NXC

Is there any way to convert a signed integer into an array of bytes in NXC? I can't use explicit type casting or pointers either, due to language limitations.
I've tried:
for(unsigned long i = 1; i <= 2; i++)
{
    MM_mem[id.idx] = (val & (0xFF << ((2 - i) * 8))) >> ((2 - i) * 8);
    id.idx++;
}
But it fails.
EDIT: This works... It just wasn't downloading. I've wasted about an hour trying to figure it out. >_>
EDIT: In NXC, >> is an arithmetic shift. int is a signed 16-bit integer type. A byte is the same thing as unsigned char.
NXC is 'Not eXactly C', a relative of C, but distinctly different from C.
How about
unsigned char b[4];
b[0] = (x & 0xFF000000) >> 24;
b[1] = (x & 0x00FF0000) >> 16;
b[2] = (x & 0x0000FF00) >> 8;
b[3] = x & 0xFF;
The best way to do this in NXC, with the opcodes available in the underlying VM, is to use FlattenVar to convert any type into a string (aka a byte array with a null added at the end). It results in a single VM opcode operation, whereas any of the above options using shifts, logical ANDs, and array operations will require dozens of lines of assembly language.
task main()
{
    int x = Random(); // 16 bit random number - could be negative
    string data;
    data = FlattenVar(x); // convert type to byte array with trailing null
    NumOut(0, LCD_LINE1, x);
    for (int i=0; i < ArrayLen(data)-1; i++)
    {
#ifdef __ENHANCED_FIRMWARE
        TextOut(0, LCD_LINE2-8*i, FormatNum("0x%2.2x", data[i]));
#else
        NumOut(0, LCD_LINE2-8*i, data[i]);
#endif
    }
    Wait(SEC_4);
}
The best way to get help with LEGO MINDSTORMS and the NXT and Not eXactly C is via the mindboards forums at http://forums.mindboards.net/
Question originally tagged c; this answer may not be applicable to Not eXactly C.
What is the problem with this:
int value;
char bytes[sizeof(int)];
bytes[0] = (value >> 0) & 0xFF;
bytes[1] = (value >> 8) & 0xFF;
bytes[2] = (value >> 16) & 0xFF;
bytes[3] = (value >> 24) & 0xFF;
You can regard it as an unrolled loop. The shift by zero could be omitted; the optimizer would certainly do so. Even though the result of right-shifting a negative value is implementation-defined, there is no problem because this code only accesses the bits where the behaviour is defined.
This code gives the bytes in a little-endian order - the least-significant byte is in bytes[0]. Clearly, big-endian order is achieved by:
int value;
char bytes[sizeof(int)];
bytes[3] = (value >> 0) & 0xFF;
bytes[2] = (value >> 8) & 0xFF;
bytes[1] = (value >> 16) & 0xFF;
bytes[0] = (value >> 24) & 0xFF;
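For instance, with value = 0x12345678 the first (little-endian) version yields bytes = {0x78, 0x56, 0x34, 0x12}, while the second (big-endian) version yields bytes = {0x12, 0x34, 0x56, 0x78}.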