Can a default constructed DDS topic type be published?

I am using OpenSplice DDS 6.4 OSS edition with C++ (the C++11 bindings). If I try to default-construct a topic instance and send it, perhaps modifying some of its fields, I get access violation exceptions somewhere in the guts of the writer. Is this a bug, or is it intended behaviour that the default constructed object is not valid?
The IDL I am using has a lot of unions in it, which I have a feeling may be relevant.

The IDL to C++ language mapping says the following about the default initialization of unions:
The default union constructor performs no application-visible initialization of the union. It does not initialize the discriminator, nor does it initialize any union members to a state useful to an application. (The implementation of the default constructor can do whatever type of initialization it wants to, but such initialization is implementation-dependent. No compliant application can count on a union ever being properly initialized by the default constructor alone.)
So it is not safe to construct a default-initialized topic instance and send it on the wire.
Just for reference, the IDL to C++11 language mapping says the following:
The default union constructor initializes the union. If there is a default case specified, the union is initialized to this default case. In case the union has an implicit default member it is initialized to that case. In all other cases it is initialized as empty. Assigning, copying, moving, and the destruction of default-constructed unions are safe.

Related

Changing a class variable from outside the class

By the time I finally managed to understand how to fix this, that is, how to change the value of an internal dynamic variable, the code had moved on and it is now declared this way:
my int $is-win = Rakudo::Internals.IS-WIN;
This is a class variable declared inside class Encoding::Builtin. That makes all the sense in the world, since an OS is not something that changes during the lifetime of a variable. However, I need to test this code on other OSes, so I would need to access that class variable and assign it a True value. Can I do that using the meta object protocol?
The concept of "class variable" doesn't exist in Perl 6.
The declaration being considered is of a lexical variable, and its lifetime is bound to the scope (bounded by curly braces) that it is declared within. It doesn't have any relationship with the class that's being declared, so there's no way to reach it through the MOP. (That the block in this question happens to be attached to a class declaration is incidental so far as lexical variables go.) Nor is it declared our, so it's not stored in the package either.
The only way a lexical can be accessed - aside from under a debugger - is if something inside of that lexical scope explicitly made it possible (for example, by acquiring a pseudo-package and storing it somewhere more widely visible, or by allowing EVAL of provided code). Neither is happening in this case, so it is not possible to access the variable.
Perl 6 is very strict about lexical scoping, and that's a very intentional part of the language design. It supports the user in understanding and refactoring the program, and the compiler in program analysis and optimization. Put another way, Perl 6 is a fairly static language when it comes to lexical things (and will likely come to do much more static analysis in future language versions), and a dynamic language when it comes to object things.

Does COM's put_XXX methods change to set_XXX in a .NET RCW

I have a COM component that has get_XXX and put_XXX methods inside it. I used it in a .NET project and an RCW was generated for it. I now see get_XXX and set_XXX methods and NOT the put_XXX ones. Is that automatic or defined somewhere in the IDL?
These are property accessor methods. A compiler that uses the COM server is expected to generate a call to get_Xxx() when the client program reads the property, put_Xxx() when it writes it. A special one that C# doesn't have at all is putref_Xxx(), used to unambiguously access an object instead of a value.
The normal translation performed by Tlbimp.exe is as a plain C# property. But that doesn't always work; C# is a lot stricter about what a property can look like:
The default property, the one that's annotated as DISPID_VALUE (dispid 0), must take a single argument to be compatible. This maps to the C# indexer property, the one that makes it look like you are indexing an array.
Any other property cannot take an argument; C# does not support indexed properties other than the indexer.
C# does not have the equivalent of putref_Xxx(); the syntax ambiguity cannot occur in a C# program because of the previous two bullets. The core reason the C# team decided to put these restrictions in place is that they greatly disliked ambiguity in the language.
So Tlbimp.exe is forced to deal with these restrictions: if the COM property accessors are not compatible, then it must fall back to exposing them as plain methods instead of a property. With default names, they'll get the get_ and set_ prefixes. The latter explains your question; why they didn't pick put_ is otherwise unclear.
Notable is that C# version 4 relaxed several of these restrictions, most of all to make interop with Office programs easier, which was quite painful in earlier C# versions, to put it mildly. It extended the property syntax to lessen the pain, but only for COM interop. If you are still stuck on an old version of .NET, now is a good time to consider updating; it is very strongly recommended.
The properties themselves have no prefixes (put_ etc.); they have names, a getter method and a setter method, but no prefixes. The method table generated from the type library receives prefixes to distinguish between getters and setters, hence the prefixes. The exact prefix string depends on the preference of whoever generates the names.
See also:
#pragma import attributes - raw_property_prefixes
By default, low-level propget, propput, and propputref methods are exposed by member functions named with prefixes of get_, put_, and putref_ respectively. These prefixes are compatible with the names used in the header files generated by MIDL.

Why does the JVM have both `invokespecial` and `invokestatic` opcodes?

Both instructions use static rather than dynamic dispatch. It seems like the only substantial difference is that invokespecial will always have, as its first argument, an object that is an instance of the class that the dispatched method belongs to. However, invokespecial does not actually put the object there; the compiler is the one responsible for making that happen by emitting the appropriate sequence of stack operations before emitting invokespecial. So replacing invokespecial with invokestatic should not affect the way the runtime stack / heap gets manipulated -- though I expect that it will cause a VerifyError for violating the spec.
I'm curious about the possible reasons behind making two distinct instructions that do essentially the same thing. I took a look at the source of the OpenJDK interpreter, and it seems like invokespecial and invokestatic are handled almost identically. Does having two separate instructions help the JIT compiler better optimize code, or does it help the classfile verifier prove some safety properties more efficiently? Or is this just a quirk in the JVM's design?
Disclaimer: It is hard to tell for sure since I never read an explicit Oracle statement about this, but I pretty much think this is the reason:
When you look at Java byte code, you could ask the same question about other instructions. Why would the verifier stop you when pushing two ints on the stack and treating them as a single long right after? (Try it, it will stop you.) You could argue that by allowing this, you could express the same logic with a smaller instruction set. (To take this argument further: a single byte cannot express many instructions, so the Java byte code instruction set should cut down wherever possible.)
Of course, in theory you would not need separate byte code instructions for pushing ints and for pushing longs onto the stack, and you are right that you would not need two instructions for INVOKESPECIAL and INVOKESTATIC in order to express method invocations. A method is uniquely identified by its method descriptor (name and raw argument types), and you could not define both a static and a non-static method with an identical descriptor within the same class. And in order to validate the byte code, the Java compiler must check whether the target method is static nevertheless.
Remark: This contradicts the answer of v6ak. However, the descriptor of a non-static method is not altered to include a reference to this.getClass(). The Java runtime could therefore always infer the appropriate method binding from the method descriptor for a hypothetical INVOKESMART instruction. See JVMS §4.3.3.
So much for the theory. However, the intentions that are expressed by both invocation types are quite different. And remember that Java byte code is supposed to be used by other tools than javac to create JVM applications, as well. With byte code, these tools produce something that is more similar to machine code than your Java source code. But it is still rather high level. For example, byte code still is verified and the byte code is automatically optimized when compiled to machine code. However, the byte code is an abstraction that intentionally contains some redundancy in order to make the meaning of the byte code more explicit. And just like the Java language uses different names for similar things to make the language more readable, the byte code instruction set contains some redundancy as well. And as another benefit, verification and byte code interpretation/compilation can speed up since a method's invocation type does not always need to be inferred but is explicitly stated in the byte code. This is desirable because verification, interpretation and compilation are done at runtime.
As a final anecdote, I should mention that a class's static initializer <clinit> was not flagged static before Java 5. In this context, the static invocation could also be inferred by the method's name but this would cause even more run time overhead.
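To make the distinction concrete, here is a small sketch (class and method names are invented, not from the original answer); compiling it and running javap -c shows invokespecial for the constructor and super call, invokestatic for the static call, and invokevirtual for the ordinary instance call.

// Sketch with made-up names: which call sites a typical javac compiles to which invoke* opcode.
// After compiling, `javap -c Sub` shows the instructions named in the comments.
class Base {
    void greet() { System.out.println("base"); }
}

public class Sub extends Base {
    static int twice(int x) { return x * 2; }        // call sites for this method use invokestatic

    @Override
    void greet() {
        super.greet();                               // invokespecial: statically bound superclass call
        System.out.println(twice(21));               // invokestatic: no receiver on the operand stack
    }

    public static void main(String[] args) {
        Sub s = new Sub();                           // the <init> constructor call is invokespecial
        s.greet();                                   // invokevirtual: ordinary, dynamically dispatched call
    }
}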
There are the definitions:
http://docs.oracle.com/javase/specs/jvms/se5.0/html/Instructions2.doc6.html#invokestatic
http://docs.oracle.com/javase/specs/jvms/se5.0/html/Instructions2.doc6.html#invokespecial
There are significant differences. Say we want to design an invokesmart instruction, which would choose smartly between invokestatic and invokespecial:
It would not be a problem to distinguish between static and virtual calls, since we can't have two methods with the same name, the same parameter types and the same return type where one is static and the other is virtual. The JVM does not allow that (for a strange reason). Thanks to raphw for noticing that.
First, what would invokesmart foo/Bar.baz(I)I mean? It may mean:
A static method call foo.Bar.baz that consumes an int from the operand stack and pushes another int. // (int) -> (int)
An instance method call foo.Bar.baz that consumes a foo.Bar and an int from the operand stack and pushes an int. // (foo.Bar, int) -> (int)
How would you choose between them? Both methods may exist.
We might try to solve it by requiring foo/Bar.baz(Lfoo/Bar;I)I for the static call. However, we may have both public static int baz(Bar, int) and public int baz(int).
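To make that collision concrete, both of the following methods are legal in the same class (names invented for illustration); a hypothetical invokesmart that folded the receiver type into the static call's descriptor could not tell the two call sites apart:

// Made-up example: legal Java today, but ambiguous for an invokesmart that encodes the receiver.
package foo;

public class Bar {
    // Static variant: a call site encoding the receiver would look like foo/Bar.baz(Lfoo/Bar;I)I
    public static int baz(Bar receiver, int value) {
        return value + 1;
    }

    // Instance variant: receiver foo.Bar on the stack plus descriptor (I)I -- the same overall shape
    public int baz(int value) {
        return value - 1;
    }
}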
We might say that it does not matter and simply disallow such situations. (I don't think that is a good idea, but just imagine.) What would it mean?
If the method is static, there are probably no additional restrictions. On the other hand, if the method is not static, there are some restrictions: "Finally, if the resolved method is protected (§4.6), and it is either a member of the current class or a member of a superclass of the current class, then the class of objectref must be either the current class or a subclass of the current class."
There are some further differences; see the note about ACC_SUPER.
It would mean that all the referenced classes must be loaded before bytecode verification. I hope this is not necessary now, but I am not 100% sure.
So, it would mean very inconsistent behavior.

What is the use of reflection in Java/C# etc [duplicate]

I was just curious, why should we use reflection in the first place?
// Without reflection
Foo foo = new Foo();
foo.hello();
// With reflection
Class<?> cls = Class.forName("Foo");
Object foo = cls.newInstance();
Method method = cls.getMethod("hello");
method.invoke(foo);
We can simply create an object and call the class's method, but why do the same using forName, newInstance and getMethod?
To make everything dynamic?
Simply put: because sometimes you don't know either the "Foo" or "hello" parts at compile time.
The vast majority of the time you do know this, so it's not worth using reflection. Just occasionally, however, you don't - and at that point, reflection is all you can turn to.
As an example, protocol buffers allows you to generate code which either contains full statically-typed code for reading and writing messages, or it generates just enough so that the rest can be done by reflection: in the reflection case, the load/save code has to get and set properties via reflection - it knows the names of the properties involved due to the message descriptor. This is much (much) slower but results in considerably less code being generated.
Another example would be dependency injection, where the names of the types used for the dependencies are often provided in configuration files: the DI framework then has to use reflection to construct all the components involved, finding constructors and/or properties along the way.
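As a minimal sketch of that configuration-driven construction (the interface, class name and property key below are all invented for illustration), the core of such a container is only a few lines:

// Minimal sketch of configuration-driven construction via reflection (all names are made up).
import java.util.Properties;

interface Greeter {
    String greet(String name);
}

class PoliteGreeter implements Greeter {
    public String greet(String name) { return "Hello, " + name; }
}

public class TinyContainer {
    public static void main(String[] args) throws Exception {
        Properties config = new Properties();
        config.setProperty("greeter.impl", "PoliteGreeter");   // normally read from a config file

        // The concrete class is only known at run time, so reflection is the only option.
        Class<?> cls = Class.forName(config.getProperty("greeter.impl"));
        Greeter greeter = (Greeter) cls.getDeclaredConstructor().newInstance();
        System.out.println(greeter.greet("world"));
    }
}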
It is used whenever you (= your method / your class) don't know at compile time the type you should instantiate or the method you should invoke.
Also, many frameworks use reflection to analyze and use your objects. For example:
hibernate/nhibernate (and any object-relational mapper) use reflection to inspect all the properties of your classes so that they are able to update them or use them when executing database operations
you may want to make it configurable which method of a user-defined class is executed by default by your application. The configured value is a String, and you can get the target class, get the method that has the configured name, and invoke it, without knowing it at compile time.
parsing annotations is done by reflection
A typical usage is a plug-in mechanism, which supports classes (usually implementations of interfaces) that are unknown at compile time.
You can use reflection for automating any process that could usefully use a list of the object's methods and/or properties. If you've ever spent time writing code that does roughly the same thing on each of an object's fields in turn -- the obvious way of saving and loading data often works like that -- then that's something reflection could do for you automatically.
The most common applications are probably these three:
Serialization (see, e.g., .NET's XmlSerializer)
Generation of widgets for editing objects' properties (e.g., Xcode's Interface Builder, .NET's dialog designer)
Factories that create objects with arbitrary dependencies by examining the classes for constructors and supplying suitable objects on creation (e.g., any dependency injection framework)
Using reflection, you can very easily write configurations that detail methods/fields in text, and the framework using these can read a text description of the field and find the real corresponding field.
e.g. JXPath allows you to navigate objects like this:
//company[@name='Sun']/address
so JXPath will look for a method getCompany() (corresponding to company), then a field called name in the result, and so on.
You'll find this in lots of frameworks in Java e.g. JavaBeans, Spring etc.
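Under the hood, that kind of lookup is mostly string manipulation plus reflection; a rough sketch (not JXPath's actual implementation, names invented):

// Rough sketch: resolve a property name such as "company" to a getter like getCompany() and call it.
import java.lang.reflect.Method;

public class PropertyResolver {
    // readProperty(invoice, "company") would invoke invoice.getCompany(), assuming such a getter exists
    static Object readProperty(Object target, String property) throws Exception {
        String getter = "get" + Character.toUpperCase(property.charAt(0)) + property.substring(1);
        Method method = target.getClass().getMethod(getter);
        return method.invoke(target);
    }
}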
It's useful for things like serialization and object-relational mapping. You can write a generic function to serialize an object by using reflection to get all of an object's properties. In C++, you'd have to write a separate function for every class.
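A toy version of such a generic routine might just walk an object's declared fields (a simplified sketch, ignoring inheritance and the access restrictions of modular applications):

// Toy generic "serializer": dumps every declared field of an arbitrary object via reflection.
import java.lang.reflect.Field;

public class FieldDumper {
    static String dump(Object obj) throws IllegalAccessException {
        StringBuilder out = new StringBuilder(obj.getClass().getSimpleName()).append(" {");
        for (Field field : obj.getClass().getDeclaredFields()) {
            field.setAccessible(true);                  // also read private fields
            out.append(' ').append(field.getName()).append('=').append(field.get(obj)).append(';');
        }
        return out.append(" }").toString();
    }
}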
I have used it in some validation classes before, where I passed a large, complex data structure in the constructor and then ran a zillion (a couple hundred, really) methods to check the validity of the data. All of my validation methods were private and returned booleans, so I made one "validate" method you could call which used reflection to invoke all the private methods in the class that returned booleans.
This made the validate method more concise (it didn't need to enumerate each little method) and guaranteed all the methods were being run (e.g. someone writes a new validation rule and forgets to call it in the main method).
After changing to use reflection I didn't notice any meaningful loss in performance, and the code was easier to maintain.
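That pattern boils down to something like the following (a simplified sketch, not the original code): every private no-argument method returning boolean is treated as a validation rule.

// Simplified sketch of the validator described above: run every private no-arg boolean method.
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class Validator {
    private final Object data;

    Validator(Object data) { this.data = data; }

    private boolean hasData()   { return data != null; }
    private boolean isStringy() { return data instanceof String; }

    public boolean validate() throws Exception {
        for (Method method : getClass().getDeclaredMethods()) {
            boolean isRule = Modifier.isPrivate(method.getModifiers())
                    && method.getParameterCount() == 0
                    && method.getReturnType() == boolean.class;
            if (!isRule) continue;
            method.setAccessible(true);              // not strictly needed from inside the class, but harmless
            if (!(Boolean) method.invoke(this)) {
                return false;                        // new rules are picked up automatically
            }
        }
        return true;
    }
}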
In addition to Jon's answer, another usage is to be able to "dip your toe in the water" and test whether a given facility is present in the JVM.
Under OS X a Java application looks nicer if some Apple-provided classes are called. The easiest way to test if these classes are present is to probe for them with reflection first.
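A typical probe looks like this (com.apple.eawt.Application is just one example of such an optional, platform-specific class):

// Probe for an optional class before using it; fall back gracefully when it is absent.
public class MacIntegration {
    static boolean hasAppleExtensions() {
        try {
            Class.forName("com.apple.eawt.Application");   // present only on Apple JVMs
            return true;
        } catch (ClassNotFoundException missing) {
            return false;
        }
    }
}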
Sometimes you need to create an object of a class on the fly, or from some place other than Java code (e.g. a JSP). In those cases reflection is useful.

What's the purpose of noncreatable coclasses in IDL?

What is the reason for declaring noncreatable coclasses like the following in IDL?
[
    uuid(uuidhere),
    noncreatable
]
coclass CoClass {
    [default] interface ICoClass;
};
I mean such class will not be registered to COM anyway. What's the reason to mention it in the IDL file and in the type library produced by compiling that IDL file?
noncreatable is good when you want to stop clients from instantiating the object with the default class factory yet still have a proper CLSID for logging, debugging, etc.; see an example at http://www.eggheadcafe.com/software/aspnet/29555436/noncreatable-atl-object.aspx of an issue which is properly resolved that way.
The noncreatable attribute is just a hint to the consumer of the object -- .Net and VB6, for example, when seeing this attribute, will not allow the client to create the object "the normal way", e.g. by calling New CoClass() [VB6].
However, the COM server's class factory is the definitive authority for deciding whether it allows objects of a given class to be created or not -- so in fact, it is possible that a class is marked noncreatable and yet the class factory allows objects to be created. To avoid such situations, make sure that you update your class factory accordingly.
Mentioning noncreatable classes in the IDL is in fact optional. Note, however, that you get at least one benefit from including them anyway: midl will create CLSID_CoClass constants etc.