Count the number of bitwise values - objective-c

I am facing a simple problem, but I haven't found a solution to it :).
If we have, for example, this:
typedef NS_OPTIONS(NSUInteger, UIViewAutoresizing) {
    UIViewAutoresizingNone                 = 0,
    UIViewAutoresizingFlexibleLeftMargin   = 1 << 0,
    UIViewAutoresizingFlexibleWidth        = 1 << 1,
    UIViewAutoresizingFlexibleRightMargin  = 1 << 2,
    UIViewAutoresizingFlexibleTopMargin    = 1 << 3,
    UIViewAutoresizingFlexibleHeight       = 1 << 4,
    UIViewAutoresizingFlexibleBottomMargin = 1 << 5
};
and a property like this:
@property (nonatomic, assign) UIViewAutoresizing autoresizingMask;
and this:
self.autoresizingMask = UIViewAutoresizingFlexibleRightMargin | UIViewAutoresizingFlexibleWidth;
The question is: how can I find out the number of items (UIViewAutoresizing values) in the autoresizingMask property? (In my example I have 2.)

There is the __builtin_popcount function, which usually translates to a single instruction on most modern hardware. It basically gives you the number of set bits in an integer.
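For example, a rough sketch (on 64-bit platforms NSUInteger is an unsigned long, so the long variant is the safer choice):
NSUInteger mask = UIViewAutoresizingFlexibleRightMargin | UIViewAutoresizingFlexibleWidth;
// __builtin_popcountl counts the 1-bits in an unsigned long,
// which covers NSUInteger on both 32- and 64-bit platforms.
int optionCount = __builtin_popcountl(mask); // 2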

It's not very common to count the number of options, usually you just ask for every option you are interested in separately.
In this case however you can just traverse all 32 (64) bits and check how many bits are set to 1 (naive solution).
For more advanced solutions see: How to count the number of set bits in a 32-bit integer?
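A rough sketch of that naive loop:
NSUInteger mask = self.autoresizingMask;
NSUInteger count = 0;
while (mask != 0) {
    count += mask & 1; // add 1 if the lowest bit is set
    mask >>= 1;        // shift to examine the next bit
}
// count is 2 for FlexibleRightMargin | FlexibleWidth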
Note this works only if every option is specified by exactly one bit. I still advise checking for every option separately:
if ((self.autoresizingMask & UIViewAutoresizingFlexibleRightMargin) != 0) {
    // do something
}

There are two ways to do this.
You either check each option's inclusion one by one, keeping track of which ones are set or how many.
Or you create a set of masks that covers all the combinations you care about and test against those for an exact match
(more boilerplate code, increasing with the number of enum options).
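A sketch of the exact-match approach, using the UIViewAutoresizing options from the question:
UIViewAutoresizing wanted = UIViewAutoresizingFlexibleRightMargin | UIViewAutoresizingFlexibleWidth;
if ((self.autoresizingMask & wanted) == wanted) {
    // both options (and possibly others) are set
}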

Objective-C NSIndexSet / NSArray - Selecting the "Best" Index from Set using Standard Dev

I have a question about using standard deviation,
and whether I'm using it properly for my case as laid out below.
The indexes are all unique.
Here are a few questions I have about standard deviation:
1) Since I'm using all of the data, should I be using a population standard deviation
or a sample standard deviation?
2) Does it matter what the length (range) of the full playlist is (1...15)?
I have a program which takes a playlist of songs and gets recommendations
for each song from Spotify.
Say the playlist has a length of 15.
Each track gets an array of about 30 suggested tracks.
In the end my program filters down all of the suggestions to
create a new playlist of only 15 tracks.
There are often duplicates among the recommendations.
I have devised a method for finding these duplicates and
then putting their indexes into an NSIndexSet.
In my example there is a duplicate track that was suggested for tracks
in the original playlist at indexes 4, 6, 7, 12.
I'm trying to work out which of the duplicates is the best one to pick.
None of the NSSet methods etc. were going to help me, because they would not
take into account where the duplicates were placed. To me it makes sense
that the more often a track was suggested within a "zone", the more sense it
makes to use it at that location in the final suggested playlist.
Originally I was just selecting the index closest to the mean (7.25),
but to me 6 would seem a better choice than 7.
The 12 seems to throw it off.
So I started investigating standard deviation and figured that could help me out.
What do you think of my approach here?
NSMutableIndexSet* dupeIndexsSet; // contains indexes 4, 6, 7, 12
// I have an extension on NSIndexSet to create an NSArray from it
NSArray* dupesIndexSetArray = [dupeIndexsSet indexSetAsArray];
// @[4, 6, 7, 12]
NSUInteger dupeIndexsCount = [dupesIndexSetArray count]; // 4
NSUInteger dupeIndexSetFirst = [dupeIndexsSet firstIndex]; // 4
NSUInteger dupeIndexSetLast = [dupeIndexsSet lastIndex]; // 12
// I have an extension on NSArray to calculate the mean
NSNumber* dupeIndexsMean = [dupesIndexSetArray meanOf]; // 7.25
the populationSD is 2.9475
the populationVariance is 8.6875
the sampleSD is 3.4034
the sampleVariance is 11.5833
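For reference, here is roughly how I compute those figures (indexSetAsArray and meanOf are my own extensions, as above; this is just a sketch of the population vs. sample calculation):
double meanValue = [dupeIndexsMean doubleValue]; // 7.25
double sumOfSquares = 0;
for (NSNumber *indexNumber in dupesIndexSetArray) {
    double diff = [indexNumber doubleValue] - meanValue;
    sumOfSquares += diff * diff;
}
// population variance divides by n, sample variance by n - 1
double populationVariance = sumOfSquares / dupeIndexsCount;       // 8.6875
double sampleVariance = sumOfSquares / (dupeIndexsCount - 1);     // 11.5833
NSNumber *populationSD = @(sqrt(populationVariance));             // 2.9475
NSNumber *sampleSD = @(sqrt(sampleVariance));                     // 3.4034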
Which SD should I use?
Or will it even matter?
I learned that the SD describes a range around the mean,
so I figured I would calculate what those values are.
double mean = [dupeIndexsMean doubleValue];
double dev = [populationSD doubleValue];
NSUInteger stdDevRangeStart = MAX(round(mean - dev), dupeIndexSetFirst);
// 7.25 - 2.9475 = 4.3025, round 4, dupeIndexSetFirst = 4
// stdDevRangeStart = 4;
NSUInteger stdDevRangeEnd = MIN(round(mean + dev), dupeIndexSetLast);
// 7.25 + 2.9475 = 10.1975, round 10, dupeIndexSetLast = 12
// stdDevRangeEnd = 10;
NSUInteger stdDevRangeLength1 = stdDevRangeEnd - stdDevRangeStart; // 6
NSUInteger stdDevRangeLength2 = MAX(round(dev * 2), stdDevRangeLength1);
// 2.9475 * 2 = 5.895, round 6, stdDevRangeLength1 = 6
// stdDevRangeLength2 = 6;
NSRange dupeStdDevRange = NSMakeRange(stdDevRangeStart, stdDevRangeLength2);
// startIndex = 4, length 6
So I figured this new range would give me a better range that
would reflect a more accurate standard deviation and not include the 12.
I create a new index set from the original one that only includes the indexes
that fall within my new dupeStdDevRange.
NSMutableIndexSet* stdDevIndexSet = [NSMutableIndexSet new];
[dupeIndexsSet enumerateIndexesInRange:dupeStdDevRange
                               options:0 // enumerate serially; NSMutableIndexSet is not thread-safe
                            usingBlock:^(NSUInteger idx, BOOL * _Nonnull stop)
{
    [stdDevIndexSet addIndex:idx];
}];
// stdDevIndexSet has indexes = 4, 6, 7
The new stdDevIndexSet now includes indexes 4, 6, 7.
12 was not included, which is great because I thought it was throwing everything off.
Now, with this new stdDevIndexSet, I check it against the original index set.
If the stdDevIndexSet count is less, then I reload this new index set into
the whole process and calculate everything again.
if ([stdDevIndexSet count] < dupeIndexsCount)
{
    [self loadDupesIndexSet:stdDevIndexSet];
}
else {
    doneTrim = YES;
}
Since the count is different, I start the whole process again with the index set that
includes 4, 6, 7.
Updated calculations:
dupeIndexsMean = 5.6667;
populationSD = 1.2472;
populationVariance = 1.5556;
sampleSD = 1.5275;
sampleVariance = 2.3333;
stdDevRangeStart = 4;
stdDevRangeEnd = 7;
The new trimmed index set now "fits" the standard deviation range.
So I use the new mean, rounded to 6.
My best index to use is 6 from the original (4, 6, 7, 12),
which now makes sense to me.
So, big question: am I approaching this correctly?
Do things like the original size (length) of the "potential" range matter?
I.e. if the original playlist length was 20 tracks as compared to 40 tracks?
(I'm thinking not.)

How are bitwise operators being used in this code?

I was looking at PSPDFKit sample code and saw this:
NSDictionary *options = @{kPSPDFProcessorAnnotationTypes :
                          @(PSPDFAnnotationTypeNone & ~PSPDFAnnotationTypeLink)
                          };
The constants PSPDFAnnotationTypeNone and PSPDFAnnotationTypeLink are defined below:
// Available keys for options. kPSPDFProcessorAnnotationDict in
// form of pageIndex -> annotations.
// ..
extern NSString *const kPSPDFProcessorAnnotationTypes;
// Annotations defined after the PDF standard.
typedef NS_OPTIONS(NSUInteger, PSPDFAnnotationType) {
    PSPDFAnnotationTypeNone      = 0,
    PSPDFAnnotationTypeLink      = 1 << 1,  // Links and multimedia extensions
    PSPDFAnnotationTypeHighlight = 1 << 2,  // (Highlight, Underline, StrikeOut) -
    PSPDFAnnotationTypeText      = 1 << 3,  // FreeText
    PSPDFAnnotationTypeInk       = 1 << 4,
    PSPDFAnnotationTypeShape     = 1 << 5,  // Square, Circle
    PSPDFAnnotationTypeLine      = 1 << 6,
    PSPDFAnnotationTypeNote      = 1 << 7,
    PSPDFAnnotationTypeStamp     = 1 << 8,
    PSPDFAnnotationTypeRichMedia = 1 << 10, // Embedded PDF videos
    PSPDFAnnotationTypeScreen    = 1 << 11, // Embedded PDF videos
    PSPDFAnnotationTypeUndefined = 1 << 31, // any annotation whose type is not recognized
    PSPDFAnnotationTypeAll       = UINT_MAX
};
I understand that ~ is the bitwise not operator and & the bitwise and operator, but what is the purpose of their application in this code?
NSDictionary *options = @{kPSPDFProcessorAnnotationTypes :
                          @(PSPDFAnnotationTypeNone & ~PSPDFAnnotationTypeLink)
                          };
Based on the comments below, the above could have been written simply as
NSDictionary *options = @{kPSPDFProcessorAnnotationTypes : @(PSPDFAnnotationTypeNone)};
since it is the same as (0 & ~2) => 0. What's the point of adding the & ~PSPDFAnnotationTypeLink part?
"~" is the bitwise not-operator.
As "&" the bitwise and.
These are usually used for bitmask (like in your example) or other binary operations (as the name lets suggest). More info on wiki - Operators in C and C++.
They are in no relationship to literals.
First of all, I don't know Objective-C, only C, but I guess the '&' is bitwise AND and the '~' is bitwise NOT.
It's the bitwise NOT operator (same as many C-based languages), which inverts all bits in the underlying value.
So, for example, the eight bit value 0x57 (binary 0101 0111) becomes 1010 1000 or 0xa8.
See here for a more complete description of the various bitwise operators.
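In a bitmask context, the usual point of ~ is to clear a single flag from a combined value, roughly like this (a sketch using the constants above):
// ~PSPDFAnnotationTypeLink has every bit set except the Link bit,
// so ANDing with it removes just that flag from a mask.
PSPDFAnnotationType everythingButLinks = PSPDFAnnotationTypeAll & ~PSPDFAnnotationTypeLink;
// In the posted snippet the left-hand operand is PSPDFAnnotationTypeNone (0),
// so the & ~PSPDFAnnotationTypeLink part has no effect and the result is still 0.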

NS_OPTIONS matches

I am trying to implement the following typedef
typedef NS_OPTIONS (NSInteger, MyCellCorners) {
    MyCellCornerTopLeft,
    MyCellCornerTopRight,
    MyCellCornerBottomLeft,
    MyCellCornerBottomRight,
};
and correctly assign a value with
MyCellCorners cellCorners = (MyCellCornerTopLeft | MyCellCornerTopRight);
When drawing my cell, how can I check which of the options are set so I can draw it correctly?
Use bit masking:
typedef NS_OPTIONS (NSInteger, MyCellCorners) {
    MyCellCornerTopLeft     = 1 << 0,
    MyCellCornerTopRight    = 1 << 1,
    MyCellCornerBottomLeft  = 1 << 2,
    MyCellCornerBottomRight = 1 << 3,
};
MyCellCorners cellCorners = MyCellCornerTopLeft | MyCellCornerTopRight;
if (cellCorners & MyCellCornerTopLeft) {
    // top left corner set
}
if (etc...) {
}
The correct way to check for this value is to first bitwise AND the values and then check for equality to the required value.
MyCellCorners cellCorners = MyCellCornerTopLeft | MyCellCornerTopRight;
if ((cellCorners & MyCellCornerTopLeft) == MyCellCornerTopLeft) {
    // top left corner set
}
The following reference explains why this is correct and provides other insights into enumerated types.
Reference: checking-for-a-value-in-a-bit-mask
I agree with NSWill. I recently had a similar issue with wrong comparison.
The right if statement should be:
if ((cellCorners & MyCellCornerTopLeft) == MyCellCornerTopLeft) {
    // top left corner set
}

Bit shifting coding efficiency (i.e. neat tricks)

I'm working on an Objective-C program where I'm getting bitfields over the network, and need to set boolean variables based on those bits.
Currently I'm representing the bitfields as ints, and then using bit shifting, similar to this (all the self properties are BOOL):
typedef enum {
    deleteFlagIndex = 0,
    uploadFlagIndex = 1,
    moveFlagIndex   = 2,
    renameFlagIndex = 3
} PrivilegeFlagIndex;

int userFlag = 5; // for example

// this would be YES
self.delete = ((userFlag & (1 << deleteFlagIndex)) == (1 << deleteFlagIndex));
// this would be NO
self.upload = ((userFlag & (1 << uploadFlagIndex)) == (1 << uploadFlagIndex));
And this works (to the best of my knowledge), but I'm wondering - is there a more concise way to code all the bit twiddling using a fancy trick/hack? I ask because I'll be doing this for a lot of flags (more than 30).
I did realize I could use this method as well:
self.move = ((userFlag & (1 << moveFlagIndex)) > 0)
...which does reduce the amount of typing, but I don't know if there's a good reason to not do it that way.
Edit: Revised to say concise rather than efficient - I wasn't worried about execution performance, but rather tips and best practices for doing this in a smart way.
Try:
typedef enum {
    deleteFlag = 1 << 0,
    uploadFlag = 1 << 1,
    moveFlag   = 1 << 2,
    renameFlag = 1 << 3
} PrivilegeFlags;
Now you can combine them using | directly.
Usually, it suffices to check against 0:
if (userFlag & deleteFlag) {
    // delete...
}
You may want to try to use bitfields and let the compiler do the optimization itself.
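A sketch of that idea with a plain C bit-field struct (the names are illustrative, not from your code):
typedef struct {
    unsigned int canDelete : 1; // each field occupies a single bit
    unsigned int canUpload : 1;
    unsigned int canMove   : 1;
    unsigned int canRename : 1;
} Privileges;

Privileges p = { .canDelete = 1, .canMove = 1 };
if (p.canDelete) {
    // delete...
}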

How to use enums with bit flags

I have an enum declaration using bit flags and I can't exactly figure out how to use it.
enum
{
    kWhite  = 0,
    kBlue   = 1 << 0,
    kRed    = 1 << 1,
    kYellow = 1 << 2,
    kBrown  = 1 << 3,
};
typedef char ColorType;
I suppose that to store multiple colors in one ColorType I should OR the bits together?
ColorType pinkColor = kWhite | kRed;
But suppose I want to check whether pinkColor contains kRed - how would I do this?
Would anyone care to give me an example using the provided ColorType?
Yes, use bitwise OR (|) to set multiple flags:
ColorType pinkColor = kWhite | kRed;
Then use bitwise AND (&) to test if a flag is set:
if ( pinkColor & kRed )
{
    // do something
}
The result of & has any bit set only if the same bit is set in both operands. Since the only bit in kRed is bit 1, the result will be 0 if the other operand doesn't have this bit set too.
If you need to get whether a particular flag is set as a BOOL rather than just testing it in an if condition immediately, compare the result of the bitwise AND to the tested bit:
BOOL hasRed = ((pinkColor & kRed) == kRed);