What is operation in enum type? - objective-c

What is:
NSStreamEventOpenCompleted = 1 << 0 , 1 << 1 , 1 << 2 , 1 << 3 , 1 << 4 ?
In the example below
typedef enum {
    NSStreamEventNone = 0,
    NSStreamEventOpenCompleted = 1 << 0,
    NSStreamEventHasBytesAvailable = 1 << 1,
    NSStreamEventHasSpaceAvailable = 1 << 2,
    NSStreamEventErrorOccurred = 1 << 3,
    NSStreamEventEndEncountered = 1 << 4
};

That's a bitwise shift operation. It is used so that you can set one or more flags from the enum. This answer has a good explanation: Why use the Bitwise-Shift operator for values in a C enum definition?
Basically, it's so that a single integer can store multiple flags, which can then be checked with the bitwise AND operator. The enum values end up looking like this:
typedef enum {
    NSStreamEventNone = 0,                   // 00000
    NSStreamEventOpenCompleted = 1 << 0,     // 00001
    NSStreamEventHasBytesAvailable = 1 << 1, // 00010
    NSStreamEventHasSpaceAvailable = 1 << 2, // 00100
    NSStreamEventErrorOccurred = 1 << 3,     // 01000
    NSStreamEventEndEncountered = 1 << 4     // 10000
};
So you can say:
// Set two flags with the bitwise OR operator
int flags = NSStreamEventEndEncountered | NSStreamEventOpenCompleted; // 10001
if (flags & NSStreamEventEndEncountered)    // true
if (flags & NSStreamEventHasBytesAvailable) // false
If you didn't have the binary shift, the values could clash or overlap and the technique wouldn't work. You may also see enums get set to 0, 1, 2, 4, 8, 16, which is the same thing as the shift above.

Related

CGAL Cartesian grid

In my code, I organize objects into a regular Cartesian grid (such as 10x10). Often, given a point, I need to test whether the point intersects the grid and, if so, which bins contain it. I already have my own implementation but I don't like hassling with precision issues.
So, does CGAL have a 2D regular Cartesian grid?
You can use CGAL::points_on_square_grid_2 to generate the grid points. CGAL kernels provide Kernel::CompareXY_2 functors, which you can use to figure out the exact location of your query point on the grid. For example, you can sort your grid points and then use std::lower_bound followed by CGAL::orientation or CGAL::collinear on the appropriate elements of your range. You could also build an arrangement, but that would be overkill.
Here is a sample code.
#include <CGAL/Exact_predicates_exact_constructions_kernel.h>
#include <CGAL/point_generators_2.h>
#include <CGAL/random_selection.h>
#include <CGAL/Polygon_2_algorithms.h>
#include <algorithm>
#include <iostream>
#include <vector>

using namespace CGAL;

using K = Exact_predicates_exact_constructions_kernel;
using Point = K::Point_2;
using Creator = Creator_uniform_2<double, Point>;
using Grid = std::vector<Point>;

const int gridSide = 3;

void locate_point(Point p, Grid grid);

int main()
{
    Grid points;
    points_on_square_grid_2(gridSide * gridSide, gridSide * gridSide,
                            std::back_inserter(points), Creator());
    std::sort(points.begin(), points.end(), K::Less_xy_2());

    std::cout << "Grid points:\n";
    for (auto& p : points)
        std::cout << p << '\n';

    std::cout << "\ncorner points:\n";
    Grid cornerPoints{points[0], points[gridSide - 1],
                      points[gridSide * gridSide - 1],
                      points[gridSide * (gridSide - 1)]};
    for (auto& p : cornerPoints)
        std::cout << p << '\n';
    std::cout << '\n';

    Point p1{-8, -8};
    Point p2{-10, 3};
    Point p3{-9, -8};
    Point p4{0, 4};
    Point p5{1, 5};
    locate_point(p1, points);
    locate_point(p2, points);
    locate_point(p3, points);
    locate_point(p4, points);
    locate_point(p5, points);
}

void locate_point(Point p, Grid grid)
{
    if (grid.empty())
    {
        std::cout << "Point " << p << " not in grid";
        return;
    }
    // check if point is in grid
    Grid cornerPoints{grid[0], grid[gridSide - 1],
                      grid[gridSide * gridSide - 1],
                      grid[gridSide * (gridSide - 1)]};
    auto point_is = CGAL::bounded_side_2(cornerPoints.begin(), cornerPoints.end(), p);
    switch (point_is)
    {
    case CGAL::ON_UNBOUNDED_SIDE:
        std::cout << "Point " << p << " not in grid\n";
        return;
    case CGAL::ON_BOUNDARY:
        std::cout << "Point " << p << " on grid boundary\n";
        return;
    case CGAL::ON_BOUNDED_SIDE:
        std::cout << "Point " << p << " is in grid\n";
    }
    auto f = std::lower_bound(grid.begin(), grid.end(), p, K::Less_xy_2());
    auto g = std::find_if(f, grid.end(), [&p](const Point& gridpoint)
                          { return K::Less_y_2()(p, gridpoint); });
    if (CGAL::collinear(p, *g, *(g - 1)))
    {
        std::cout << "Point " << p << " on grid side between points "
                  << *(g - 1) << " and " << *g << '\n';
        return;
    }
    std::cout << "Point " << p << " in bin whose upper right point is " << *g << '\n';
    return;
}
Output:
Grid points:
-9 -9
-9 0
-9 9
0 -9
0 0
0 9
9 -9
9 0
9 9
corner points:
-9 -9
-9 9
9 9
9 -9
Point -8 -8 is in grid
Point -8 -8 in bin whose upper right point is 0 0
Point -10 3 not in grid
Point -9 -8 on grid boundary
Point 0 4 is in grid
Point 0 4 on grid side between points 0 0 and 0 9
Point 1 5 is in grid
Point 1 5 in bin whose upper right point is 9 9

Translating NSEvent from Objective-C to Lua

I'm currently trying to read the contents of an Apple-made plist file using Lua, so that I can use its values for something.
The plist contains keyboard shortcuts, using a 'modifierMask'.
By testing one by one, I've determined that the below modifierMask values match the listed modifier keys - but I'm unsure of how exactly Apple is calculating the mask value:
-- modifierMask = 131072 (shift)
-- modifierMask = 262144 (control)
-- modifierMask = 524288 (option)
-- modifierMask = 1048576 (command)
-- modifierMask = 786432 (control + option)
-- modifierMask = 393216 (control + shift)
-- modifierMask = 1310720 (control + command)
-- modifierMask = 1572864 (option + command)
-- modifierMask = 655360 (shift + option)
-- modifierMask = 1179648 (command + shift)
-- modifierMask = 917504 (control + shift + option)
-- modifierMask = 1703936 (option + command + shift)
-- modifierMask = 1835008 (control + option + command)
Someone else has suggested that most likely the modifier masks match up to the NSEvent modifier flags, and supplied the following Objective-C example:
Modifier Flags
The following constants (except for NSDeviceIndependentModifierFlagsMask) represent device-independent bits found in event modifier flags:
Declaration
OBJECTIVE-C
enum {
    NSAlphaShiftKeyMask = 1 << 16,
    NSShiftKeyMask = 1 << 17,
    NSControlKeyMask = 1 << 18,
    NSAlternateKeyMask = 1 << 19,
    NSCommandKeyMask = 1 << 20,
    NSNumericPadKeyMask = 1 << 21,
    NSHelpKeyMask = 1 << 22,
    NSFunctionKeyMask = 1 << 23,
    NSDeviceIndependentModifierFlagsMask = 0xffff0000U
};
This looks promising, however I know nothing about Objective-C, so I was just wondering if anyone could please help me translate these Objective-C declarations into something I can use within Lua? Basically I want to create a Lua function that inputs a modifierMask (i.e. '131072') and returns a result that tells me what that modifierMask means (i.e. 'shift'). Any ideas?
Thanks in advance!
Answered here:
maskToTable = function(value)
    local modifiers = {
        AlphaShift = 1 << 16,
        Shift = 1 << 17,
        Control = 1 << 18,
        Alternate = 1 << 19,
        Command = 1 << 20,
        NumericPad = 1 << 21,
        Help = 1 << 22,
        Function = 1 << 23,
    }
    local answer = {}
    for k, v in pairs(modifiers) do
        if (value & v) == v then
            table.insert(answer, k)
        end
    end
    return answer
end

Definition of bit masks in objective-c

I am learning Objective-C and I can't understand what bit masks are; can anyone help me understand, please? I also don't know what the << operator does.
Objective-C is an extension of C, and it uses the same bitwise operators. Let's take UIRemoteNotificationType as an example:
UIRemoteNotificationTypeNone = 0,
UIRemoteNotificationTypeBadge = 1 << 0,
UIRemoteNotificationTypeSound = 1 << 1,
UIRemoteNotificationTypeAlert = 1 << 2,
UIRemoteNotificationTypeNewsstandContentAvailability = 1 << 3,
The << is the shift left operator and its function is obvious once you look at the binary form:
1 << 0 = 1 (decimal) = 0000001 (binary)
1 << 1 = 2 (decimal) = 0000010 (binary)
1 << 2 = 4 (decimal) = 0000100 (binary)
1 << 3 = 8 (decimal) = 0001000 (binary)
It shifts a specific pattern (the left operand) to the left, the 'length' of the shift is determined by the right operand. It works with other numbers than 1; 3 << 2 = 12 because 0000011 (binary) shifted two places is 0001100. Translated to normal mathematics, a << b = a * 2^b.
The specific use of this pattern is that it is very easy to check if a certain option is set. Suppose I want my application to send notifications with badges and alerts. I pass the value UIRemoteNotificationTypeBadge | UIRemoteNotificationTypeAlert to the API, which is
UIRemoteNotificationTypeBadge = 0000001
UIRemoteNotificationTypeAlert = 0000100
total = 0000101 |
(the | is the bitwise OR operator; for every bit, the result is 1 if one or both of the corresponding bit of the operands is 1).
The API can then check if the badge property is present with the & operator:
total = 0000101
UIRemoteNotificationTypeBadge = 0000001
result = 0000001 &
(the & is the bitwise AND operator; for every bit, the result is 1 if both of the corresponding bit of the operands is 1).
The result is non-zero, so the badge property is present. Let's do the same with the sound property:
total = 0000101
UIRemoteNotificationTypeSound = 0000010
result = 0000000 &
The result is zero, so the sound property is not present.

Best practice to use bit operations to set some flags

I would like to turn on/off 3 stars that represent a level of difficulty. I don't want to use several if conditions; would it be possible to do this with just bitwise operations?
Let's say I have declared an enum like this:
enum
{
    EASY = 0,
    MODERATE,
    CHALLENGING
} Difficulty;
I would like to find a bit operation that let me find which star to turn on or off:
e.g:
level 2 (challenging)
star 0 -> 1
star 1 -> 1
star 2 -> 1
level 1 (moderate)
star 0 -> 1
star 1 -> 1
star 2 -> 0
level 0 (easy)
star 0 -> 1
star 1 -> 0
star 2 -> 0
If you want to use 3 bits to save your star states, instead of having three boolean flags, then you should do:
typedef enum
{
    DifficultyEasy = 1 << 0,
    DifficultyModerate = 1 << 1,
    DifficultyChallenging = 1 << 2
} Difficulty;
Difficulty state = 0; // default
To set Easy:
state |= DifficultyEasy;
To add Challenging:
state |= DifficultyChallenging;
To reset Easy:
state &= ~DifficultyEasy;
To check whether Challenging is set:
BOOL isChallenging = DifficultyChallenging & state;
In case somebody needs an explanation of how it works:
1 << x means set x bit to 1 (from right);
// actually it means move 0b00000001 left by x, but I said 'set' to simplify it
1 << 5 = 0b00100000; 1 << 2 = 0b00000100; 1 << 0 = 0b00000001;
0b00001111 | 0b11000011 = 0b11001111 (0 | 0 = 0, 1 | 0 = 1, 1 | 1 = 1)
0b00001111 & 0b11000011 = 0b00000011 (0 & 0 = 0, 1 & 0 = 0, 1 & 1 = 1)
~0b00001111 = 0b11110000 (~0 = 1, ~1 = 0)
You would want to do something like this:
typedef enum Difficulty : NSUInteger
{
    EASY = 1 << 0,
    MODERATE = 1 << 1,
    CHALLENGING = 1 << 2
} Difficulty;
And then to check it:
- (void) setStarsWithDifficulty:(Difficulty)diff
{
    star0 = (diff & (EASY | MODERATE | CHALLENGING));
    star1 = (diff & (MODERATE | CHALLENGING));
    star2 = (diff & CHALLENGING);
}
Are you talking about something like:
star0 = 1
star1 = value & CHALLENGING || value & MODERATE
star2 = value & CHALLENGING
#define STAR0 1
#define STAR1 2
#define STAR2 4
#define EASY STAR0
#define MODERATE (STAR1|STAR0)
#define CHALLENGING (STAR0|STAR1|STAR2)
Testing a value d with AND and comparing against 0 will produce the required mapping; some of the samples above give you the mapped value instead. Take a look:
int d = EASY;
NSLog(@"Star 0 %d", (d&STAR0)!=0);
NSLog(@"Star 1 %d", (d&STAR1)!=0);
NSLog(@"Star 2 %d", (d&STAR2)!=0);
d = MODERATE;
NSLog(@"Star 0 %d", (d&STAR0)!=0);
NSLog(@"Star 1 %d", (d&STAR1)!=0);
NSLog(@"Star 2 %d", (d&STAR2)!=0);
d = CHALLENGING;
NSLog(@"Star 0 %d", (d&STAR0)!=0);
NSLog(@"Star 1 %d", (d&STAR1)!=0);
NSLog(@"Star 2 %d", (d&STAR2)!=0);

NSMakeRange - replaceBytesInRange Question

In the following code I get the expected results
- (int)getInt{
    NSRange intRange = NSMakeRange(0,3);
    char buffer[4];
    [stream getBytes:buffer range:intRange];
    [stream replaceBytesInRange:NSMakeRange(0, 3) withBytes:NULL length:0];
    return (int) (
        (((int)buffer[0] & 0xffff) << 24) |
        (((int)buffer[1] & 0xffff) << 16) |
        (((int)buffer[2] & 0xffff) << 8) |
        ((int)buffer[3] & 0xffff) );
}
If I change intRange to 0, 4 I get the expected results.
If I change replaceBytesInRange to 0, 4 I seem to lose an extra byte in the stream.
I'm okay with using 0,3, but I'm wondering why this happens, because with 2- and 8-byte replacements I don't get this behavior.