Store negative integer in Core Data - objective-c

I can properly assign a positive integer to an attribute of a managed object model instance and retrieve it. However, assigning a negative integer to this attribute records the number "4294967295" in my Core Data persistent store (an XML file). Thus, when the application reloads and the managed object is re-instantiated, the attribute reads "4294967295".
This attribute is specified in my DataModel as type Integer 32 and has a "Min Value" of "-12". I'm guessing this has something to do with storing negative integers as strings. This code produces the same "4294967295":
NSLog(#"Log -1: %u", -1);
=> "Log -1: 4294967295"
What's the proper way to store a negative integer in Core Data?

It's not a problem with Core Data; it's a problem with your format specifier. %u means that you want the argument formatted as an unsigned integer, which cannot be negative. Use %d or %i instead (these mean signed integers).
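A minimal sketch of the difference (plain Foundation, no Core Data involved; the variable name is invented):

#import <Foundation/Foundation.h>

int main(void) {
    int score = -12; // matches the attribute's "Min Value" of -12
    NSLog(@"signed: %d", score); // prints "signed: -12"
    // %u is a mismatched specifier here; in practice it shows the
    // two's-complement bit pattern reinterpreted as unsigned.
    NSLog(@"unsigned: %u", score); // prints "unsigned: 4294967284"
    return 0;
}

With %d, the Integer 32 attribute round-trips negative values through the persistent store as expected.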

Related

RedisGraph: Specifying integer type of values? Max int?

Is there a way to specify which specific type of integer a property can use: int16, uint32, etc.? Or is it just NUMBER?
Second: what is the biggest integer value that we can use in RedisGraph?
Currently, all integers are stored as 64-bit signed integers, so the maximum will always be INT64_MAX. The C standard fixes this value at 0x7fffffffffffffff, or 9,223,372,036,854,775,807.
Since RedisGraph does not use a schema to enforce the types of properties (a.val can be an integer on one node and a string on another), values are stored in a 16-byte struct with type data, so being able to specify smaller integer types would not result in space savings.
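For reference, a quick client-side check of that ceiling in plain C (just stdint.h; nothing RedisGraph-specific):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    // INT64_MAX is the 64-bit signed ceiling mentioned above
    printf("%" PRId64 "\n", INT64_MAX); // 9223372036854775807
    printf("%" PRIx64 "\n", INT64_MAX); // 7fffffffffffffff
    return 0;
}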

Objective C: Parsing JSON string

I have a string data which I need to parse into a dictionary object. Here is my code:
NSString *barcode = [NSString stringWithString:@"{\"OTP\": 24923313, \"Person\": 100000000000112, \"Coupons\": [ 54900012445, 499030000003, 00000005662 ] }"];
NSLog(@"%@", [barcode objectFromJSONString]);
In this log, I get a NULL result. But if I pass only one value in Coupons, I get the results. How do I get all three values?
00000005662 might not be a proper integer number as it's prefixed by zeroes (which means it's octal, IIRC). Try removing them.
Cyrille is right; here is the authoritative answer:
The application/json Media Type for JavaScript Object Notation (JSON): 2.4 Numbers
The representation of numbers is similar to that used in most programming languages. A number contains an integer component that may be prefixed with an optional minus sign, which may be followed by a fraction part and/or an exponent part.
Octal and hex forms are not allowed. Leading zeros are not allowed.
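A sketch of the fix, using NSJSONSerialization from Foundation (iOS 5+) in place of the JSONKit objectFromJSONString call from the question:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Leading zeros removed from the last coupon so the JSON is valid
        NSString *barcode = @"{\"OTP\": 24923313, \"Person\": 100000000000112, \"Coupons\": [54900012445, 499030000003, 5662]}";
        NSData *data = [barcode dataUsingEncoding:NSUTF8StringEncoding];
        NSError *error = nil;
        NSDictionary *parsed = [NSJSONSerialization JSONObjectWithData:data options:0 error:&error];
        NSLog(@"%@", parsed ? parsed : error); // all three coupon values survive
    }
    return 0;
}

If the leading zeros are significant (as they often are for coupon codes), send the values as JSON strings rather than numbers.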

Enumerating Strings as bytes?

I was looking for a way to enumerate String types in (vb).NET, but .NET enums only accept numeric type values.
The first alternative I came across was to create a dictionary of my enum values and the string I want to return. This worked, but was hard to maintain because if you changed the enum you would have to remember to also change the dictionary.
The second alternative was to set field attributes on each enum member and retrieve them using reflection. Sure enough, this worked as well and also solved the maintenance problem, but it uses reflection, and I've always read that using reflection should be a last resort.
So I started thinking and I came up with this: every ASCII character can be represented as a hexadecimal value, and you can assign hexadecimal values to enum members.
You could get rid of the attributes and assign the hexadecimal values to the enum members. Then, when you need the text value, convert the value to a byte array and use System.Text.Encoding.ASCII.GetString(enumMemberBytes) to get the string value.
Now, speaking from experience, anything I come up with is usually either flawed or just plain wrong. What do you guys think about this approach? Is there any reason not to do it like that?
Thanks.
EDIT
As pointed out by David W, enum member values are limited in length, depending on the underlying type (integer by default). So yes, I believe my method works, but you are limited to characters in the ASCII table, with a maximum length of 4 or 8 characters using integers or longs, respectively.
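For illustration, here is the packing idea sketched in C rather than VB.NET (the enum name and values are invented):

#include <stdio.h>

/* Each enum value packs up to four ASCII bytes, most significant first. */
enum Word {
    WordHi = 0x4869,   /* "Hi"  */
    WordYes = 0x594553 /* "YES" */
};

static void printWord(enum Word w) {
    char buf[5];
    int n = 0;
    for (int i = 3; i >= 0; i--) {
        char c = (char)(((unsigned)w >> (8 * i)) & 0xFF);
        if (c != 0) /* skip the leading zero bytes of shorter names */
            buf[n++] = c;
    }
    buf[n] = '\0';
    printf("%s\n", buf);
}

int main(void) {
    printWord(WordYes); /* prints "YES" */
    return 0;
}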
The easiest way I have found to dynamically parse a String representation of an Enumeration into the actual Enumeration type is to do the following:
Private Enum EnumObject
    [Undefined]
    ValueA
    ValueB
End Enum
Dim enumVal As EnumObject = DirectCast([Enum].Parse(GetType(EnumObject), "ValueA"), EnumObject)
This removes the need to maintain a dictionary and lets you work with strings directly instead of converting to an Int or a Long. This does use reflection, but I have not come across any issues as long as you catch and handle any exceptions from the string parse.

Is it possible to create custom byte in .net

I am creating a WCF in vb.net inside VS 2010. I have a handful of properties that are currently bytes (0 - 255) and represent different test scores. Is it possible for me to create my own type based on this that will only allow values between 0 and 110? For example, if I have
Dim a as Byte
a = 256
I will get "Constant expression not representable in type 'Byte'." before the code is compiled. I want to have something like this for my own type so the below code would give me "Constant expression not representable in type 'myByte'."
Dim a as myByte
a = 111
You can only use predefined (native) types, such as Byte, and implement some features on top, like overloading operators to check minimum and maximum values. However, not every operator can be overloaded, and in this case that includes the assignment operator '='.
Check http://msdn.microsoft.com/en-us/library/8edha89s%28v=vs.71%29.aspx and the tutorials there to see if they help.
To assign a value to your type, you can make use of properties or methods that set the value while checking for boundaries and other conditions; that is perfectly doable.
But to define it as a native... negative, sir.
Nope, I don't think that's possible. You'll have to use a constructor to initialize your myByte instance and do the range check at runtime (not sure how useful that would be).

simple question about assigning float to int

This is probably something very simple, but I'm not getting the results I'm expecting. I apologise if it's a stupid question; I just don't know what to google for.
Easiest way to explain is with some code:
int var = 2.0*4.0;
NSLog(#"%d", 2.0*4.0);//1
NSLog(#"%d", var);//2
if ((2.0*4.0)!=0) {//3
NSLog(#"true");
}
if (var!=0) {//4
NSLog(#"true");
}
This produces the following output:
0 //1
8 //2
true //3
true //4
The one that I don't understand is line //1. Why are all the others converting (I'm assuming the correct word is "casting", please correct me if I'm wrong) the float into an int, but inside NSLog it's not happening. Does this have something to do with the string formatting %d parameter and it being fussy (for lack of a better word)?
You're telling NSLog that you're passing it an integer with the @"%d" format specifier, but you're not actually giving it an integer; you're giving it a double-precision floating-point value (8.0, as it happens). When you lie to NSLog, its behavior is undefined, and you get unexpected results like this.
Don't lie to NSLog. If you want to convert the result of 2.0*4.0 to an integer before printing, you need to do that explicitly:
NSLog(#"%d", (int)(2.0*4.0));
If, instead, you want to print the result of 2.0*4.0 as a double-precision floating-point number, you need to use a different format specifier:
NSLog(#"%g", 2.0*4.0);
More broadly, this is true of any function that takes a variable number of arguments and some format string to tell it how to interpret them. It's up to you to make sure that the data you pass it matches the corresponding format specifiers; implicit conversions will not happen for you.
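A sketch of why, using a toy variadic function (the function is invented; stdarg.h is the standard C mechanism underneath printf-style functions):

#include <stdarg.h>
#include <stdio.h>

/* Like printf, this function has no way to verify that the
   arguments actually match what the caller promised. */
static void print_ints(int count, ...) {
    va_list ap;
    va_start(ap, count);
    for (int i = 0; i < count; i++)
        printf("%d ", va_arg(ap, int)); /* trusts the caller completely */
    va_end(ap);
    printf("\n");
}

int main(void) {
    print_ints(3, 10, 20, 30); /* fine: three ints, as promised */
    /* print_ints(1, 8.0) would be undefined behavior: a double
       popped as an int, exactly like NSLog(@"%d", 2.0*4.0). */
    return 0;
}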
First, you never used floats in your program. They are doubles.
Second, the arguments of NSLog, printf and the like are not automatically converted to what you specify using %d or %f. They follow the default promotion rules for untyped (variadic) arguments. See the ISO C specification, secs. 6.5.2.2.6 and 6.5.2.2.7. Note the somewhat surprising rule that inside these calls,
a float is automatically promoted to double,
and any integer smaller than an int is promoted to int. (see 6.3.1.1.2)
So, strictly speaking, the conversion specification %f does not print a float but a double. See the same document, sec. 7.19.6.1.8.
Note also that in your cases 1 and 3, the promotions are to double.
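A small sketch of the first rule (the variable names are arbitrary):

#import <Foundation/Foundation.h>

int main(void) {
    float f = 1.5f;
    double d = 2.5;
    // The float is promoted to double on its way through the
    // variadic argument list, so %f is correct for both values.
    NSLog(@"%f %f", f, d); // prints "1.500000 2.500000"
    return 0;
}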
In examples 2, 3 and 4, the double is either being assigned to an int (which converts it) or compared with an int (which converts the int operand to double before comparing). In 1, however, you're passing the double as an argument to a function. The printf-style functions allow all the arguments after the initial format string to be of any type, so this is valid. But since the compiler doesn't know you mean for it to be an int (remember, you haven't done anything to let the compiler know), the double is passed along as a floating-point value. When printf sees the %d formatting specifier, it pops enough bytes for an int from the argument list and interprets those bytes as an int. Those bytes happen to look like an integer 0.
The format specifier %d expects a decimal number, meaning a base-10 integer, not a floating-point value. What you want there is %f if you're trying to get it to print 8.0.
The first parameter to NSLog is a format string, and the second (and subsequent) parameters can be of any type. The compiler doesn't know what the types should be at compile time and so doesn't try to cast them to anything. At run time, NSLog assumes the second (and subsequent) parameters are as specified in the format string. If there's a mismatch, unexpected and generally unhappy things happen.
Summary: make sure you pass variables of the right type in the second (and subsequent) parameters.