RedisGraph: Specifying integer type of values? Max int? - redisgraph

Is there a way to specify which specific integer type a property can use: int16, uint32, ...?
Or is it just NUMBER?
Second: what is the largest integer value that we can use in RedisGraph?

Currently, all integers are stored as 64-bit signed integers, so the maximum value will always be INT64_MAX. That limit is theoretically implementation-defined, but on every system I'm familiar with it resolves to 0x7fffffffffffffff, or 9,223,372,036,854,775,807.
Since RedisGraph does not use a schema to enforce the types of properties (a.val can be an integer on one node and a string on another), values are stored in a 16-byte struct with type data, so being able to specify smaller integer types would not result in space savings.
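For reference, that limit is just arithmetic and can be checked directly; a quick sketch in Python (RedisGraph itself is language-agnostic, this only computes the 64-bit signed bound):

```python
# INT64_MAX: the largest 64-bit signed integer, 2^63 - 1
INT64_MAX = 2**63 - 1
print(INT64_MAX)       # 9223372036854775807
print(hex(INT64_MAX))  # 0x7fffffffffffffff
```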

Related

How to convert Double (or Float) to Byte (or Short) in Kotlin 1.4?

In Kotlin 1.4 the functions toByte() and toShort() are missing for the Float and Double data types. How can I convert those to Short or Byte?
As the official docs state:
Conversions of floating-point numbers to Short and Byte could lead to
unexpected results because of the narrow value range and smaller
variable size.
So if you want to convert to Byte or Short, you should do two steps: first convert to Int (with toInt()) and then to the target type (e.g. toShort()).
For instance: myVar.toInt().toByte()
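The two-step conversion is just truncation followed by a narrowing of the low bits, which can be mimicked outside Kotlin to see what happens at each step. A sketch in Python for illustration; the helper names to_int and to_byte are hypothetical stand-ins for Kotlin's toInt() (truncate toward zero) and Int.toByte() (keep the low 8 bits, signed):

```python
def to_int(x: float) -> int:
    # Mimics Kotlin's Float.toInt(): truncate toward zero
    return int(x)

def to_byte(n: int) -> int:
    # Mimics Kotlin's Int.toByte(): keep the low 8 bits as a signed value
    return ((n & 0xFF) ^ 0x80) - 0x80

print(to_byte(to_int(3.99)))   # 3
print(to_byte(to_int(200.7)))  # -56: 200 wraps past the Byte range
```

The second line shows why the docs warn about the narrow value range: values above 127 silently wrap once narrowed to a byte.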

Flatbuffers: can I change int field to struct with 1 int?

Based on a very good approach for null fields proposed by the main contributor to flatbuffers:
https://github.com/google/flatbuffers/issues/333#issuecomment-155856289
The easiest way to get a null default for an integer field is to wrap
it in a struct. This will get you null if scalar isn't present. It
also doesn't take up any more space on the wire than a regular int.
struct myint { x:int; }
table mytable { scalar:myint; }
Also based on the flatbuffers documentation:
https://google.github.io/flatbuffers/md__schemas.html
You can't change types of fields once they're used, with the exception of same-size data where a reinterpret_cast would give you a desirable result, e.g. you could change a uint to an int if no values in current data use the high bit yet.
My question is can I treat int as reinterpret_cast-able to myint?
In other words, if I start with just a simple int as a field, can I later on decide that I actually want this int to be nullable and change it to myint? I know that all values that used to be default value in the first int schema will be read as null in the myint schema and I am ok with that.
Of course the obvious follow up question is can I do the same thing for all scalar types?
Though this isn't explicitly documented, yes: int and myint are wire-format compatible, since both are stored inline. As you say, any field that held the old default value will now read as null.
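For concreteness, the schema evolution being asked about looks like this (same names as in the quote above; only the type of scalar changes between versions):

```
// before: plain scalar, an absent value reads as the default 0
table mytable { scalar:int; }

// after: wrapped in a struct, an absent value reads as null
struct myint { x:int; }
table mytable { scalar:myint; }
```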

Convert an alphanumeric string to integer format

I need to store an alphanumeric string in an integer column on one of my models.
I have tried:
result.each do |i|
  hex_id = []
  i["id"].split(//).each { |c| hex_id.push(c.hex) }
  hex_id = hex_id.join
  ...
  Model.create(:origin_id => hex_id)
  ...
end
When I run this in the console with puts hex_id in place of the create line, it returns the correct values; however, the code above sets origin_id to "2147483647" for every record. An example input string is "t6gnk3pp86gg4sboh5oin5vr40", so that doesn't make any sense to me.
Can anyone tell me what is going wrong here or suggest a better way to store a string like the aforementioned example as a unique integer?
Thanks.
Answering by request from OP
The hex_id.join call does concatenate the array elements into one long digit string, which is why puts hex_id prints the expected value in the console. The problem is on the database side: that string is assigned to a 32-bit integer column, and since the number is far larger than the column's maximum, the database clamps it to 2147483647 (INT32_MAX) on every insert.
In other words, the desired result 060003008600401100500050040 is too large to be stored as an integer at all. A better approach would be to keep it as a string, or to derive a smaller number from the original string (for example with a hash function) if you really need a numeric column.
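What the loop computes can be reproduced step by step; a sketch in Python for illustration (the id is the example from the question, and the fallback to 0 mimics Ruby's String#hex, which returns 0 for non-hex characters):

```python
s = "t6gnk3pp86gg4sboh5oin5vr40"

# Mimic c.hex for each character: parse as hex, non-hex chars become 0
digits = [int(c, 16) if c in "0123456789abcdef" else 0 for c in s]

# join concatenates the digits into one string -- it does not sum them
joined = "".join(str(d) for d in digits)
print(joined)                    # "060003008600401100500050040"
print(int(joined) > 2**31 - 1)   # True: far beyond INT32_MAX (2147483647)
```

The final comparison is the crux: the concatenated number cannot fit in a 32-bit integer column, so the stored value gets clamped.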

Enumerating Strings as bytes?

I was looking for a way to enumerate String types in (vb).NET, but .NET enums only accept numeric type values.
The first alternative I came across was to create a dictionary of my enum values and the string I want to return. This worked, but was hard to maintain because if you changed the enum you would have to remember to also change the dictionary.
The second alternative was to set field attributes on each enum member and retrieve them using reflection. Sure enough, this worked as well and also solved the maintenance problem, but it uses reflection, and I've always read that reflection should be a last resort.
So I started thinking and I came up with this: every ASCII character can be represented as a hexadecimal value, and you can assign hexadecimal values to enum members.
You could get rid of the attributes and assign the hexadecimal values to the enum members. Then, when you need the text value, convert the value to a byte array and use System.Text.Encoding.ASCII.GetString(enumMemberBytes) to get the string value.
Now speaking out of experience, anything I come up with is usually either flawed or just plain wrong. What do you guys think about this approach? Is there any reason not to do it like that?
Thanks.
EDIT
As pointed out by David W, enum member values are limited in length, depending on the underlying type (integer by default). So yes, I believe my method works but you are limited to characters in the ASCII table, with a maximum length of 4 or 8 characters using integers or longs respectively.
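The round trip the question describes can be sketched language-neutrally; Python here for illustration (the .NET equivalent would use BitConverter.GetBytes and Encoding.ASCII.GetString), with pack and unpack as hypothetical helper names:

```python
def pack(s: str) -> int:
    # Pack a short ASCII string into an integer, little-endian,
    # mirroring the "enum value as ASCII bytes" idea
    assert len(s) <= 8, "at most 8 bytes fit in a 64-bit value"
    return int.from_bytes(s.encode("ascii"), "little")

def unpack(n: int) -> str:
    # Recover the string from the integer, dropping padding zero bytes
    return n.to_bytes(8, "little").rstrip(b"\x00").decode("ascii")

val = pack("ValueA")
print(hex(val))
print(unpack(val))  # "ValueA"
```

This also makes the length limit from the EDIT concrete: one character per byte, so 4 characters for a 32-bit value and 8 for a 64-bit one.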
The easiest way I have found to dynamically parse a String representation of an Enumeration into the actual Enumeration type was to do the following:
Private Enum EnumObject
    [Undefined]
    ValueA
    ValueB
End Enum

Dim enumVal As EnumObject = DirectCast([Enum].Parse(GetType(EnumObject), "ValueA"), EnumObject)
This removes the need to maintain a dictionary and lets you work with strings directly instead of converting to an Int or a Long. It does use reflection, but I have not run into any issues as long as you catch and handle any exceptions from the string parse.

Store negative integer in Core Data

I can properly assign and retrieve a positive integer to an attribute of a managed object model instance. However, assigning a negative integer to this attribute records the number "4294967295" to my Core Data persistent store (an XML file). Thus, when the application reloads and the managed object is re-instantiated, the attribute reads "4294967295".
This attribute is specified in my DataModel as type Integer 32 and has a "Min Value" of "-12". I'm guessing this has something to do with storing negative integers as strings. This code produces the same "4294967295":
NSLog(@"Log -1: %u", -1);
=> "Log -1: 4294967295"
What's the proper way to store a negative integer in Core Data?
It's not a problem with Core Data; it's a problem with your format specifier. %u means the argument is formatted as an unsigned integer, which cannot be negative. Use %d or %i instead (the signed integer specifiers).
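The reinterpretation behind that output can be reproduced outside Objective-C; a sketch in Python, where struct packs -1 as a signed 32-bit value and reads the same four bytes back as unsigned, which is exactly what %u does to a negative argument:

```python
import struct

# -1 as a signed 32-bit int is the bytes ff ff ff ff;
# read unsigned, those same bytes are 2^32 - 1
raw = struct.pack("<i", -1)
unsigned = struct.unpack("<I", raw)[0]
print(unsigned)  # 4294967295
```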