After reading through the Windows 8 app certification requirements, I was wondering why they state this:
• Must apply the FlagsAttribute to UInt32 enums.
• Must not apply the FlagsAttribute to Int32 enums.
What's the reasoning behind it?
The certification requirements can currently be found at http://msdn.microsoft.com/en-us/library/windows/apps/hh694083.aspx
There are two scenarios for the use of Enums in WinRT: As enumerated value constants and as bitfield value constants. The enumerated value form is represented as a signed integer (because it is enumerated) and the bitfield form is represented as an unsigned integer (to allow all 32 bits to be used for flags). All bitfield enums are required to have the FlagsAttribute.
This rule in the validation logic enforces that the underlying type of the enum is correct given the value of the FlagsAttribute.
This is important because some of the language projections will not correctly consume enums with the FlagsAttribute if the underlying type of the enum is signed.
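The practical difference is easiest to see with the underlying type itself. Below is a plain C++ sketch (the FileFlags name and values are made up, and this is ordinary C++ rather than WinRT metadata) showing why a flags enum wants an unsigned 32-bit underlying type: with a signed 32-bit type, an enumerator that uses the high bit wouldn't even compile. In a C# WinRT component, the equivalent is declaring the enum with a uint underlying type and applying [Flags].

#include <cstdint>
#include <iostream>

// Hypothetical bitfield-style enum. Because the fixed underlying type is
// unsigned, all 32 bits (including the high bit) are usable as flags; with
// int32_t, the Archive enumerator below would be out of range.
enum class FileFlags : std::uint32_t
{
    None     = 0x00000000,
    ReadOnly = 0x00000001,
    Hidden   = 0x00000002,
    Archive  = 0x80000000   // high bit: representable only because the type is unsigned
};

inline FileFlags operator|(FileFlags a, FileFlags b)
{
    return static_cast<FileFlags>(static_cast<std::uint32_t>(a) |
                                  static_cast<std::uint32_t>(b));
}

int main()
{
    FileFlags f = FileFlags::ReadOnly | FileFlags::Archive;
    std::cout << std::hex << static_cast<std::uint32_t>(f) << '\n';   // prints 80000001
}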
I noticed that the hashcode of Char-values is exactly the ID they have in ASCII, for example:
println('a'.hashCode()) //is 97
Is this true by contract, and where can I see the implementation for this? The class Any.kt doesn't contain the implementation, and neither does Char.kt.
I noticed that the hashcode of Char-values is exactly the ID they have in ASCII […]
That is impossible. ASCII only has 128 values, but Kotlin's Char has 65536, so clearly a Char cannot, in general, have its ASCII value as its hash code: about 99.8% of them don't have an ASCII value at all.
Is this true by contract
No, it is not. The contract for kotlin.Char.hashCode() is:
fun hashCode(): Int
Returns a hash code value for the object. The general contract of hashCode is:
Whenever it is invoked on the same object more than once, the hashCode method must consistently return the same integer, provided no information used in equals comparisons on the object is modified.
If two objects are equal according to the equals() method, then calling the hashCode method on each of the two objects must produce the same integer result.
That is the whole contract. There is nothing about a relationship with ASCII.
and where can I see the implementation for this? The class Any.kt doesn't contain the implementation, and neither does Char.kt.
I am assuming types like kotlin.Char or kotlin.Int are not actually implemented as Kotlin objects but as compiler intrinsics for performance reasons. For example, I would expect 42 to be a JVM int on the JVM platform and an ECMAScript number on the ECMAScript platform, and not implemented as a full-blown object with object header, instance variable table, class pointer, etc.
As it so happens, Kotlin's contract for hashCode() matches the contract of pretty much every other language as well, so I would expect that it re-uses the underlying platform's implementation as much as possible. (In fact, I would suspect that is precisely the reason for designing the contract this way.)
Even for Kotlin/Native, it makes sense to map kotlin.Int to a native machine integer type such as int_fast32_t or int32_t.
I'm hoping someone could illustrate a common use case for the Microsoft Bond runtime schemas (SchemaDef). I understand these are used when schema definitions are not known at compile time, but if the shape of an object is fluid and changes frequently, what benefits might a runtime generated schema provide?
My use case is that the business user is in control of the shape of an object (via a rules engine). They could conceivably do all sorts of things that could break our backward compatibility (for example, invert the order of fields on the object). If we plan on persisting all the object versions that the user created, is there any way to manage backward/forward compatibility using Bond runtime schemas? I presume not, since if they invert from this:
0: int64 myInt;
1: string myString;
to this
0: string myString;
1: int64 myInt;
I'd expect a runtime error, which implies that managing the object with runtime schemas wouldn't provide much help to me.
What would be a use case where a runtime schema would in fact be useful?
Thank you!
Some of the uses for runtime schemas are:
• with the Simple Binary protocol, to handle schema changes
• schema validation/evolution
• rendering a struct in a GUI
• custom mapping from one struct to another
Your case feels like schema validation, if you can proactively reject a schema that would not be compatible. I worked on a system that used Bond under the hood and took this approach. There was an explicit "change the schema of this entity" operation that validated whether the two schemas were compatible with each other.
I don't know the data flow in your system, so such validation might not be possible. In that case, you could use the runtime schemas, along with some rules provided by the business users, to convert between different shapes.
Simple Binary
When deserializing from Simple Binary, the reader must know the exact schema that the writer used, otherwise it has no way to interpret the bytes, resulting in potentially silent data corruption.
Such corruption can happen if the schema undergoes the following change:
// starting struct
struct Foo
{
    0: uint8 f1;
    1: uint16 f2;
}
The Simple Binary serialized representation of Foo { f1: 1, f2: 2 } is 0x01 0x02 0x00.
Let's now change the schema to this:
// changed struct
struct Foo
{
    0: uint8 f1;
    // It's OK to remove an optional field.
    // 1: uint16 f2;
    2: uint8 f3;
    3: uint8 f4;
}
If we deserialize 0x01 0x02 0x00 with this schema, we'll get Foo { f1: 1, f3: 2, f4: 0 }. Notice that f3 is 2, which is not correct: it should be 0. With the runtime schema for the old Foo, the reader will know that the second and third bytes correspond to a field that has since been deleted and can skip them, resulting in the expected Foo { f1: 1, f3: 0, f4: 0 }.
Schema Validation and Evolution
Some systems that use Bond have different rules for schema evolution than the normal Bond rules. Runtime schemas can be used to enforce such rules (e.g., checking a type to enforce a rule that no collections are used) before accepting structs of a given type or before registering such a schema in, say, a repository of known schemas.
You could also walk two schemas to determine whether they are compatible with each other. It would be nice if Bond provided such an API itself, so that it doesn't have to be reimplemented again and again. I've opened a GitHub issue for such an API.
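To make the schema-walking idea concrete, here is a rough C++ sketch of such a check. It is not a complete compatibility rule set: it only compares the root structs, it assumes the SchemaDef/StructDef/FieldDef layout described in Bond's bond.bond, and a real implementation would also have to recurse into nested structs, containers, and bonded fields.

#include <bond/core/bond.h>   // Bond's runtime schema types; the exact include may vary by version

// Sketch: flag fields that keep the same ordinal but change their basic wire
// type between the old and new schema (e.g. int64 <-> string on the same id),
// which is the kind of change that breaks previously persisted payloads.
bool RootFieldsLookCompatible(const bond::SchemaDef& oldSchema,
                              const bond::SchemaDef& newSchema)
{
    const bond::StructDef& oldRoot = oldSchema.structs[oldSchema.root.struct_def];
    const bond::StructDef& newRoot = newSchema.structs[newSchema.root.struct_def];

    for (const bond::FieldDef& oldField : oldRoot.fields)
    {
        for (const bond::FieldDef& newField : newRoot.fields)
        {
            if (newField.id == oldField.id && newField.type.id != oldField.type.id)
            {
                return false;   // same ordinal, different wire type: reject
            }
        }
    }

    // Added or removed fields are left to the protocol's own rules:
    // tagged protocols tolerate them, Simple Binary does not.
    return true;
}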
GUI
With a runtime schema, you have extra information about the struct, including things like the names of the fields. (The binary encoding protocols omit field names, relying, instead, on field IDs.) You can use this additional information to do things like create GUI controls specific to each field.
There's an example showing inspection of a runtime schema in both C# and C++.
Custom Mapping
In C++, the MapTo transform can be used to convert one struct to another with an incompatible shape, given a set of rules. There's an example of this that makes use of a runtime schema to derive the rules.
I'm looking for a quick way to serialize custom structures consisting of basic value types and strings.
Using C++/CLI to pin the pointer of the structure instance and the destination array, and then memcpy the data over, works quite well for all the value types. However, if I include any reference types such as String, then all I get is the reference address.
I expected as much, since otherwise it would be impossible for the structure to have a fixed... structure. I figured that maybe, if I make the string fixed size, it might place it inside the structure. Adding <VBFixedString(256)> to the string declaration did not achieve that.
Is there anything else that would place the actual data inside the structure?
Pinning a managed object and memcpy-ing the content will never give you what you want. Any managed object, be it a String, a character array, or anything else, will show up as a reference, and you'll just get a memory location.
If I read between the lines, it sounds like you need to call some C or C++ (not C++/CLI) code, and pass it a C struct that looks similar to this:
struct UnmanagedFoo
{
    int a_number;
    char a_string[256];
};
If that's the case, then I'd solve this by setting up the automatic marshaling to handle this for you. Here's how you'd define that struct so that it marshals properly. (I'm using C# syntax here, but it should be an easy conversion to VB.net syntax.)
[StructLayout(LayoutKind.Sequential, CharSet=CharSet.Ansi)]
public struct ManagedFoo
{
    public int a_number;
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst=256)]
    public string a_string;
}
Explanation:
StructLayout(LayoutKind.Sequential) specifies that the fields should be in the declared order. The default LayoutKind, Auto, allows the fields to be re-ordered if the compiler wants.
CharSet=CharSet.Ansi specifies the type of strings to marshal. You can specify CharSet.Ansi to get char strings on the C++ side, or CharSet.Unicode to get wchar_t strings in C++.
MarshalAs(UnmanagedType.ByValTStr) specifies a string stored inline in the struct, which is what you were asking about. There are several other string types with different semantics; see the UnmanagedType page on MSDN for descriptions.
SizeConst=256 specifies the size of the character array. Note that this specifies the number of characters (or when doing arrays, number of array elements), not the number of bytes.
Now, these marshal attributes are an instruction to the built-in marshaler in .Net, which you can call directly from your VB.Net code. To use it, call Marshal.StructureToPtr to go from the .Net object to unmanaged memory, and Marshal.PtrToStructure to go from unmanaged memory to a .Net object. MSDN has some good examples of calling those two methods, take a look at the linked pages.
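For instance, here is a minimal C++/CLI sketch of the StructureToPtr/PtrToStructure round trip (the examples on MSDN are in C# and VB.NET; this assumes the attributed ManagedFoo from above is visible to the C++/CLI code):

// C++/CLI sketch: marshal a ManagedFoo into unmanaged memory and back.
using namespace System;
using namespace System::Runtime::InteropServices;

void Example(ManagedFoo foo)
{
    int size = Marshal::SizeOf(ManagedFoo::typeid);    // size of the unmanaged layout
    IntPtr buffer = Marshal::AllocHGlobal(size);
    try
    {
        // Writes the fields (including the inline 256-char string) into the buffer.
        Marshal::StructureToPtr(foo, buffer, false);

        // ... hand the buffer to native code that expects an UnmanagedFoo* ...

        // Going the other way: read the unmanaged memory back into a ManagedFoo.
        ManagedFoo copy = safe_cast<ManagedFoo>(
            Marshal::PtrToStructure(buffer, ManagedFoo::typeid));
    }
    finally
    {
        Marshal::FreeHGlobal(buffer);
    }
}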
Wait, what about C++/CLI? Yes, you could use C++/CLI to marshal from the .NET object to a C struct. If your structs get too complex to represent with the MarshalAs attribute, it's highly appropriate to do that. In that case, here's what you do: Declare your .NET struct like I listed above, without the MarshalAs or StructLayout. Also declare the C struct, plain and ordinary, as listed above. When you need to switch from one to the other, copy things field by field, not with a big memcpy. Yes, all the fields that are basic types (integers, doubles, etc.) will be repetitive output.a_number = input.a_number assignments, but that's the proper way to do it.
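As a rough C++/CLI sketch of that field-by-field copy (the struct declarations mirror the hypothetical ones above; marshal_context from msclr/marshal.h converts the String^ into a narrow C string):

// C++/CLI sketch: copy field by field instead of pinning and memcpy-ing.
#include <cstring>
#include <msclr/marshal.h>

struct UnmanagedFoo
{
    int  a_number;
    char a_string[256];
};

// Same shape as before, but no MarshalAs/StructLayout is needed for this approach.
public value struct ManagedFoo
{
    int             a_number;
    System::String^ a_string;
};

void CopyToNative(ManagedFoo input, UnmanagedFoo& output)
{
    // Basic value types copy directly.
    output.a_number = input.a_number;

    // Convert the System::String to a char* and copy it into the fixed-size
    // buffer, truncating if needed and always terminating.
    msclr::interop::marshal_context ctx;
    const char* s = ctx.marshal_as<const char*>(input.a_string);
    std::strncpy(output.a_string, s, sizeof(output.a_string) - 1);
    output.a_string[sizeof(output.a_string) - 1] = '\0';
}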
I am working on the data access in C++ ATL/COM.
How do you pass a nullable type (e.g. a nullable integer) in an interface?
In ATL/C++ interfaces (IDL) you don't have nullable types or language support for them (the <type>? construct in C#). A nullable is basically the type itself plus an extra BOOL indicating whether the value is currently NULL or not.
One can implement a relatively simple template class that looks, to the extent possible, like C#'s Nullable. On the interface this will be either two arguments, or, as you discovered, you can use the VARIANT type, since it already embeds the payload value and a .vt member indicating the type. The VT_NULL constant is what it says: the value of the whole variant is null.
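For illustration, a rough C++ sketch of the VARIANT flavor (not a full IDL/interface definition; the function names are made up):

#include <windows.h>
#include <oleauto.h>

// A "nullable integer" carried in a VARIANT: VT_I4 holds the value,
// VT_NULL represents the null case.
void PutMaybeInt(VARIANT* out, const LONG* value)
{
    VariantInit(out);
    if (value)
    {
        out->vt = VT_I4;
        out->lVal = *value;
    }
    else
    {
        out->vt = VT_NULL;
    }
}

bool TryGetInt(const VARIANT& in, LONG& value)
{
    if (in.vt == VT_I4)
    {
        value = in.lVal;
        return true;
    }
    return false;   // VT_NULL (or any other type): no integer value
}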
What is the difference between
typedef enum {
...
} Name;
and
enum {
...
};
typedef NSUInteger Name;
? If functionality is the same, what is the second form good for? Isn't it unnecessarily messy?
enum is as old as C, therefore a part of Objective-C.
It is just an explicit way of encoding an int type. It's quite useful for debugging, and most newer compilers can make optimizations based on it (which you can safely ignore). It's most useful in making your code more readable (to anyone else, or to yourself after you've slept).
typedef enum {
...
} NameType;
would be followed by
NameType name;
and that's typically the preferred style for a typedef'd enum.
Your second example does not tie the typedef to the enum, so there is nothing indicating that the values you specify should only be of the given type.
Note that this does not prevent you from executing
name = 10244; // some non-valid value not listed in the enumeration
but some compilers might generate a warning in that case.
I ran across Apple's use of the following today:
enum {
    NSFetchedResultsChangeInsert = 1,
    NSFetchedResultsChangeDelete = 2,
    NSFetchedResultsChangeMove = 3,
    NSFetchedResultsChangeUpdate = 4
};
typedef NSUInteger NSFetchedResultsChangeType;
They do this because they really want the NSFetchedResultsChangeType to be of the type they have defined as NSUInteger. This can be an int but it can also be something else. And with values of 1, 2, 3, and 4, it's somewhat irrelevant to us what the type is. But they are coding to a different level of abstraction because they are a tools provider.
Apple's code is generally a good place to look for coding-style hints: if you see something that looks like a cleaner/better way to code, it usually is. As Kevin mentioned, API stability is of paramount importance for them.
Edit (Jan 2013): If you have access to the WWDC 2012 session videos, you should watch Session 405 - Modern Objective-C, 6:00-10:00. There is discussion of a new syntax in the newer compiler that allows explicit sizing of the type and tight binding of values to types (borrowed from C++11).
enum NSFetchedResultsChangeType : NSUInteger {
    NSFetchedResultsChangeInsert = 1,
    NSFetchedResultsChangeDelete = 2,
    NSFetchedResultsChangeMove = 3,
    NSFetchedResultsChangeUpdate = 4
};
The former defines a type name to refer to an enum; this is the way most enums are named in C. The latter is a bit different, though, and it's prevalent in the Cocoa frameworks. There are two reasons to use the latter. The first is when your enum defines a bitfield: you'd want it there because when you provide a "Name" value you'll actually be providing a combination of the enum values. In other words, if you say something like
[self doSomethingWithBitfield:(Enum1 | Enum2)]
you're not passing a value of Name but rather an integer that's a combination of the two.
However, Cocoa frameworks use this idiom even for non-bitfield values, for a very good reason: API stability. According to the C standard, the underlying integral type of an enum is required to be able to contain all values in the enum, but is otherwise chosen by the compiler. This means that adding a new enum value could change the integral type of the enum (e.g. adding -1 can make it signed, adding 6 billion can make it into a long long, etc.). This is a bad thing from an API stability standpoint, because the type encoding of methods which take values of this enum could change unexpectedly and potentially break existing code and binaries. In order to prevent this, the Cocoa frameworks generally define the type as being an NSUInteger (or NSInteger if they need negative numbers), so the API and type encodings stay stable.
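A small illustration of the point (hypothetical names; MyUInteger stands in for Foundation's NSUInteger):

typedef unsigned long MyUInteger;   // stand-in for NSUInteger

enum {
    MyChangeInsert = 1,
    MyChangeDelete = 2
    // New constants added here may change the integral type the compiler
    // picks for this anonymous enum, but that type never appears in any API.
};
typedef MyUInteger MyChangeType;

// The parameter is typed as MyChangeType (i.e. MyUInteger), so this
// function's type encoding stays the same no matter what constants are
// added to the enum later.
void HandleChange(MyChangeType change);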