Flatbuffers: can I change int field to struct with 1 int?

Based on a very good approach for null fields proposed by the main contributor to flatbuffers:
https://github.com/google/flatbuffers/issues/333#issuecomment-155856289
The easiest way to get a null default for an integer field is to wrap
it in a struct. This will get you null if scalar isn't present. It
also doesn't take up any more space on the wire than a regular int.
struct myint { x:int; }
table mytable { scalar:myint; }
Also based on the flatbuffers documentation:
https://google.github.io/flatbuffers/md__schemas.html
You can't change types of fields once they're used, with the exception of same-size data where a reinterpret_cast would give you a desirable result, e.g. you could change a uint to an int if no values in current data use the high bit yet.
My question is can I treat int as reinterpret_cast-able to myint?
In other words, if I start with just a simple int as a field, can I later on decide that I actually want this int to be nullable and change it to myint? I know that all values that used to be default value in the first int schema will be read as null in the myint schema and I am ok with that.
Of course, the obvious follow-up question is: can I do the same thing for all scalar types?

Though this isn't explicitly documented: yes, int and myint are wire-format compatible (they are both stored inline). As you say, any values that were at the old default will now read back as null.
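To make the compatibility concrete, here is a minimal reader-side sketch in C++, assuming flatc-generated accessors for the schema above (the header name and generated accessor names are illustrative assumptions, not taken from the question):
#include "flatbuffers/flatbuffers.h"
#include "mytable_generated.h"   // assumed output of `flatc --cpp` for the schema above

// `buf` may have been written with the old schema (scalar:int) or the new one
// (scalar:myint); either way the field occupies the same 4 inline bytes.
int ReadScalarOr(const void *buf, int fallback) {
    const mytable *t = flatbuffers::GetRoot<mytable>(buf);
    const myint *wrapped = t->scalar();   // struct fields are returned by pointer
    if (wrapped == nullptr) {
        // Null: the writer never stored the field (or stored the old default).
        return fallback;
    }
    return wrapped->x();                  // the same bytes a plain `scalar:int` used
}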

Golang SQL Scanning to Struct strangeness

Please imagine a table as such:
FirstName (String), LastName (String), Fee (Int), Opt_Out (TinyInt(4))
And a struct as such:
type Dude struct {
    Firstname string
    Lastname  string
    Fee       int
    Opt_out   int
}
To keep it short: I am using database/sql to scan into structs with no issue, except when it comes to that TinyInt.
err = rows.Scan(&firstname, &lastname, &fee, &opt_out)
After scanning, and before assigning values to my struct, I
fmt.Println(opt_out)
and it always prints a zero value.
If I run the query directly against the SQL server, I get the correct ones and zeros I am expecting.
I have also tried changing the type of "Opt_out" in the struct to string, and attempted to cast via the query itself by doing a
IF(Opt_out=1,"yes","no")
and a similar thing happens: the query run in SQL returns the expected results, but Go returns empty values.
I am drawing a blank. Any ideas?
Jeez. Ok, this had nothing to do with the tinyint itself apparently.
The 'fee' column in the database had null values in it; the pre-existing query was set to replace null with empty strings:
IFNULL(fee,'')
It seems that the SQL driver 'broke' very quietly on seeing those, and just gave up on processing everything afterwards.
My fix was amending the query to IFNULL(fee,0), and everything popped back to life.
Thanks RayfenWindspear! I would not have given it the extra mile without your feedback.

wxWidgets - wxGrid - reading/writing non string cell values

I have a wxGrid to edit an array of numerical data.
I was wondering what's the best way to get non-string data in and out of the cells without going through the string to numeric conversion all the time.
I've used SetCellEditor() to control the data entry.
Currently, I use this:
// numeric value into cell
str.clear();
str << val1;
m_grid4->SetCellValue(row, col, str);
...
// read the value from the cell back into a variable
val = atoi(m_grid4->GetCellValue(row, col));
Apart from the fact that atoi() is a bit ugly and a template function with a stringstream would be better, is there a way to get non-string values in and out of cells a bit more cleanly?
I was looking at the editors and renderers but can't figure it out.
If you worry about efficiency, you almost certainly should use a custom table class deriving from wxGridTableBase instead of the default trivial wxGridStringTable implementation, which stores everything as strings. Then, and much less importantly, if it makes sense in your case, you can use wxGridCellNumberRenderer, which will call your table's GetValueAsLong() method instead of GetValue() (which returns a string).
Both of those are demonstrated in the wxGrid sample; notably, look at BugsGridTable there.
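For illustration only, a minimal sketch of such a table might look like the following (the class name and the flat std::vector<long> storage are example choices, not taken from the question):
#include <wx/grid.h>
#include <vector>

// A numeric-only table deriving from wxGridTableBase: values stay as longs and
// are only rendered as text on demand, instead of living in the grid as strings.
class NumberTable : public wxGridTableBase
{
public:
    NumberTable(int rows, int cols)
        : m_rows(rows), m_cols(cols), m_data(rows * cols, 0) {}

    virtual int GetNumberRows() { return m_rows; }
    virtual int GetNumberCols() { return m_cols; }
    virtual bool IsEmptyCell(int, int) { return false; }

    // String access, used by the default renderer/editor.
    virtual wxString GetValue(int row, int col)
        { return wxString::Format("%ld", m_data[row * m_cols + col]); }
    virtual void SetValue(int row, int col, const wxString& value)
        { value.ToLong(&m_data[row * m_cols + col]); }

    // Typed access, used by wxGridCellNumberRenderer/wxGridCellNumberEditor.
    virtual bool CanGetValueAs(int, int, const wxString& typeName)
        { return typeName == wxGRID_VALUE_NUMBER; }
    virtual bool CanSetValueAs(int, int, const wxString& typeName)
        { return typeName == wxGRID_VALUE_NUMBER; }
    virtual long GetValueAsLong(int row, int col)
        { return m_data[row * m_cols + col]; }
    virtual void SetValueAsLong(int row, int col, long value)
        { m_data[row * m_cols + col] = value; }

private:
    int m_rows, m_cols;
    std::vector<long> m_data;
};

// Attach it to the grid; the second argument makes the grid take ownership:
// m_grid4->SetTable(new NumberTable(100, 3), true);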
Good luck!

How to tell if an identifier is being assigned or referenced? (FLEX/BISON)

So, I'm writing a language using flex/bison and I'm having difficulty with implementing identifiers, specifically when it comes to knowing when you're looking at an assignment or a reference,
for example:
1) A = 1+2
2) B + C (where B and C have already been assigned values)
Example one I can work out by returning an ID token from flex to bison, and just following a grammar that recognizes that 1+2 is an integer expression, putting A into the symbol table, and setting its value.
Example two is more difficult for me because, after going through my lexer, what's being returned to bison is "ID PLUS ID". I have a grammar that recognizes arithmetic expressions for numerical values, like INT PLUS INT (which would produce an INT), or DOUBLE MINUS INT (which would produce a DOUBLE). If I have "ID PLUS ID", how do I know what type the return value is?
Here's the best idea that I've come up with so far: when tokenizing, every time an ID comes up, I search for its value and type in the symbol table and switch out the ID token with its respective information. For example: while tokenizing, I come across B, which has a regex that matches it as being an ID. I look in my symbol table and see that it has a value of 51.2 and is a DOUBLE. So instead of returning ID with a value of B to bison, I'm returning DOUBLE with a value of 51.2.
I have two different solutions that contradict each other. Here's why: if I want to assign a value to an ID, I would say to my compiler A = 5. In this situation, if I'm using my previously described solution, what I'm going to get after everything is tokenized might be INT ASGN INT, or STRING ASGN INT, etc. So, in this case, I would use the former solution, as opposed to the latter.
My question would be: what kind of logical device do I use to help my compiler know which solution to use?
NOTE: I didn't think it necessary to post source code to describe my conundrum, but I will if anyone could use it effectively as a reference to help me understand their input on this topic.
Thank you.
The usual way is to have a yacc/bison rule like:
expr: ID { $$ = lookupId($1); }
where the lookupId function looks up a symbol in the symbol table and returns its type and value (or type and storage location if you're writing a compiler rather than a strict interpreter). Then, your other expr rules don't need to care whether their operands come from constants or symbols or other expressions:
expr: expr '+' expr { $$ = DoAddition($1, $3); }
The function DoAddition takes the types and values (or locations) for its two operands and either adds them, producing a result, or produces code to do the addition at run time.
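For concreteness, here is a sketch of what the semantic value and those two helpers could look like in the interpreter case (the Value layout and the std::map symbol table are illustrative choices, not part of the answer above):
#include <map>
#include <string>

// Tagged value carried as the parser's semantic type (e.g. via %union).
enum ValueType { TYPE_INT, TYPE_DOUBLE };
struct Value {
    ValueType type;
    union { int i; double d; };
};

// Symbol table for the interpreter case: identifier -> current value.
static std::map<std::string, Value> symtab;

// Used by the rule `expr: ID { $$ = lookupId($1); }`.
Value lookupId(const std::string &name) {
    // A real implementation would report "undefined identifier" instead of throwing.
    return symtab.at(name);
}

// Used by the rule `expr: expr '+' expr { $$ = DoAddition($1, $3); }`;
// the result type is chosen from the operand types.
Value DoAddition(const Value &a, const Value &b) {
    Value r;
    if (a.type == TYPE_DOUBLE || b.type == TYPE_DOUBLE) {
        r.type = TYPE_DOUBLE;
        r.d = (a.type == TYPE_DOUBLE ? a.d : a.i)
            + (b.type == TYPE_DOUBLE ? b.d : b.i);
    } else {
        r.type = TYPE_INT;
        r.i = a.i + b.i;
    }
    return r;
}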
If possible, redesign your language so that the situation is unambiguous. This is why even JavaScript has var.
Otherwise you're going to need to disambiguate via semantic rules, for example that the first use of an identifier is its declaration. I don't see what the problem is with your case (2): just generate the appropriate code. If B and C haven't been used yet, a value-reading use like this should be illegal, but that involves you in control flow analysis if taken to the Nth degree of accuracy, so you might prefer to assume initial values of zero.
In any case you can see that it's fundamentally a language design problem rather than a coding problem.

Enumerating Strings as bytes?

I was looking for a way to enumerate String types in VB.NET, but .NET enums only accept numeric type values.
The first alternative I came across was to create a dictionary of my enum values and the string I want to return. This worked, but was hard to maintain because if you changed the enum you would have to remember to also change the dictionary.
The second alternative was to set field attributes on each enum member and retrieve them using reflection. Sure enough, this worked as well and also solved the maintenance problem, but it uses reflection, and I've always read that using reflection should be a last resort.
So I started thinking and I came up with this: every ASCII character can be represented as a hexadecimal value, and you can assign hexadecimal values to enum members.
You could get rid of the attributes and assign the hexadecimal values to the enum members. Then, when you need the text value, convert the value to a byte array and use System.Text.Encoding.ASCII.GetString(enumMemberBytes) to get the string value.
Now speaking out of experience, anything I come up with is usually either flawed or just plain wrong. What do you guys think about this approach? Is there any reason not to do it like that?
Thanks.
EDIT
As pointed out by David W, enum member values are limited in length, depending on the underlying type (integer by default). So yes, I believe my method works but you are limited to characters in the ASCII table, with a maximum length of 4 or 8 characters using integers or longs respectively.
The easiest way I have found to dynamically parse a String representation of an Enumeration into the actual Enumeration type was to do the following:
Private Enum EnumObject
    [Undefined]
    ValueA
    ValueB
End Enum
Dim enumVal As EnumObject = DirectCast([Enum].Parse(GetType(EnumObject), "ValueA"), EnumObject)
This removes the need to maintain a dictionary and allows you to just handle strings instead of converting to an Int or a Long. This does use reflection, but I have not come across any issues as long as you catch and handle any exceptions with the String Parse.

simple question about assigning float to int

This is probably something very simple, but I'm not getting the results I'm expecting. I apologise if it's a stupid question; I just don't know what to google for.
Easiest way to explain is with some code:
int var = 2.0*4.0;
NSLog(@"%d", 2.0*4.0); //1
NSLog(@"%d", var); //2
if ((2.0*4.0)!=0) { //3
    NSLog(@"true");
}
if (var!=0) { //4
    NSLog(@"true");
}
This produces the following output:
0 //1
8 //2
true //3
true //4
The one that I don't understand is line //1. Why are all the others converting the float into an int (I'm assuming the correct word is "casting"; please correct me if I'm wrong), but inside NSLog it's not happening? Does this have something to do with the %d string-formatting parameter and it being fussy (for lack of a better word)?
You're telling NSLog that you're passing it an integer with the @"%d" format specifier, but you're not actually giving it an integer; you're giving it a double-precision floating-point value (8.0, as it happens). When you lie to NSLog, its behavior is undefined, and you get unexpected results like this.
Don't lie to NSLog. If you want to convert the result of 2.0*4.0 to an integer before printing, you need to do that explicitly:
NSLog(#"%d", (int)(2.0*4.0));
If, instead, you want to print the result of 2.0*4.0 as a double-precision floating-point number, you need to use a different format specifier:
NSLog(#"%g", 2.0*4.0);
More broadly, this is true of any function that takes a variable number of arguments and some format string to tell it how to interpret them. It's up to you to make sure that the data you pass it matches the corresponding format specifiers; implicit conversions will not happen for you.
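The same contract applies to plain C printf, so the effect is easy to reproduce outside Objective-C. A small sketch, where the first call deliberately repeats the mistake (it is undefined behavior, and most compilers will warn about it):
#include <cstdio>

int main() {
    printf("%d\n", 2.0 * 4.0);          // wrong: %d promises an int, but a double is passed
    printf("%d\n", (int)(2.0 * 4.0));   // fix 1: convert explicitly, prints 8
    printf("%g\n", 2.0 * 4.0);          // fix 2: match the specifier to the type, prints 8
    return 0;
}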
First, you never used floats in your program. They are doubles.
Second, the arguments of NSLog, printf, and the like are not automatically converted to what you specify using %d or %f. They follow the standard promotion rules for untyped arguments. See the ISO specification, sec 6.5.2.2.6 and 6.5.2.2.7. Note the super weird rule that inside these functions,
a float is automatically promoted to double,
and any integer smaller than an int is promoted to int. (see 6.3.1.1.2)
So, strictly speaking, the %f specification is not showing a float, but a double. See the same document, Sec. 7.19.6.1.8.
Note also that in your cases 1 and 3, the promotions are to double.
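Those promotion rules are easy to see from the other side of a hand-written variadic function; a minimal sketch (the function name is made up for the example):
#include <cstdarg>
#include <cstdio>

// Any float argument reaches a variadic function as a double, and any
// char or short reaches it as an int, so va_arg must read the promoted types.
static void dump_doubles(int count, ...) {
    va_list ap;
    va_start(ap, count);
    for (int i = 0; i < count; ++i) {
        double d = va_arg(ap, double);   // read double, never float
        printf("%g\n", d);
    }
    va_end(ap);
}

int main() {
    float f = 1.5f;
    dump_doubles(2, f, 2.0 * 4.0);       // prints 1.5 and 8; f arrived as a double
    return 0;
}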
In examples 2, 3 and 4, the float is either being assigned to an int (which converts it) or compared with an int (which also converts it). In 1, however, you're passing the float as an argument to a function. The printf function allows all the arguments after the initial format string to be of any type, so this is valid. But since the compiler doesn't know you mean for it to be an int (remember, you haven't done anything to let the compiler know), the float is passed along as a floating-point value. When printf sees the %d formatting specifier, it pops enough bytes for an int from the argument list and interprets those bytes as an int. Those bytes happen to look like an integer 0.
The format string %d expects a decimal number, meaning a base-10 integer, not a floating-point value. What you want there is %f if you're trying to get it to print out 8.0.
The first parameter to NSLog is a format string; the second (and subsequent) parameters can be of any type. The compiler doesn't know what the types should be at compile time and so doesn't try to cast them to anything. At run time, NSLog assumes the second (and subsequent) parameters are as specified in the format string. If there's a mismatch, unexpected and generally unhappy things happen.
Summary: make sure you pass variables of the right type in the second (and subsequent) parameters.