I'm new to building CORBA applications. I'm currently developing a CORBA application in Java. The problem I have is that I must write a method that receives, as a string, the name of the class, the method, and the arguments to pass to the CORBA server.
Before invoking the remote method, I have to parse the string and obtain all the necessary information (class, method, arguments).
There is no problem there. But concerning the arguments, I do not know their types in advance, so I need to be able to convert an argument by getting its type and insert it into an Any object to be sent. Is that possible?
If I know the type in advance, such as seq.insert_string("bum"), it works, but I want to do it dynamically.
Use the DynAny interfaces, if your ORB supports them. They can do exactly what you want. From CORBA Explained Simply:
If an application wants to manipulate data embedded inside an any
without being compiled with the relevant stub code then the
application must convert the any into a DynAny. There are sub-types
of DynAny for each IDL construct. For example, there are types called
DynStruct, DynUnion, DynSequence and so on.
The operations on the DynAny interfaces allow a programmer to
recursively drill down into a compound data-structure that is
contained within the DynAny and, in so doing, decompose the compound
type into its individual components that are built-in types.
Operations on the DynAny interface can also be used to recursively
build up a compound data-structure from built-in types.
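If the parsed arguments turn out to be built-in types, you can also pick the matching typed insert call on a plain Any at runtime. Below is a minimal sketch (assuming the string arguments have already been parsed into Java objects; the helper name and the set of supported types are my own choices), with compound IDL types still going through DynAny as described above:

import org.omg.CORBA.Any;
import org.omg.CORBA.ORB;

public class AnyBuilder {
    // Hypothetical helper: chooses the matching insert_*() call
    // based on the runtime type of the already-parsed argument.
    public static Any toAny(ORB orb, Object arg) {
        Any any = orb.create_any();
        if (arg instanceof String) {
            any.insert_string((String) arg);
        } else if (arg instanceof Integer) {
            any.insert_long((Integer) arg);      // IDL long maps to Java int
        } else if (arg instanceof Long) {
            any.insert_longlong((Long) arg);     // IDL long long maps to Java long
        } else if (arg instanceof Double) {
            any.insert_double((Double) arg);
        } else if (arg instanceof Boolean) {
            any.insert_boolean((Boolean) arg);
        } else {
            throw new IllegalArgumentException(
                "No Any mapping for type " + arg.getClass().getName());
        }
        return any;
    }
}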
The ABAP documentation lists three kinds of modularization structures:
Methods. Problem: methods don't accept parameters.
Function modules. Problem: FMs belong to function groups and can be called from other programs. Apparently they are meant to be reused across the system.
Forms. Problem: they are marked as "obsolete".
Is there a newer structure that replaces the obsolete FORM structure, that is:
Local to our program.
Accepts parameters.
Doesn't require ABAP Objects syntax ?
Methods. Problem: methods don't accept parameters.
I am not sure how you came to that conclusion, because methods support parameters very well. The only limitation compared to FORMs is that they don't support TABLES parameters to take a TABLE WITH HEADER LINE. But they support CHANGING parameters with internal tables, which covers any case where you don't actually need the header-line. And in the rare case that you are indeed forced to deal with a TABLE WITH HEADER LINE and the method actually needs the header-line (I pity you), you can pass the header-line as a separate parameter.
You declare a method with parameters like this:
CLASS lcl_main DEFINITION.
  PUBLIC SECTION.
    METHODS foo
      IMPORTING iv_bar          TYPE i
      EXPORTING es_last_message TYPE bapiret2
      CHANGING  ct_all_messages TYPE bapiret2_t.
ENDCLASS.
And you call it either like that:
main->foo( EXPORTING iv_bar          = 1
           IMPORTING es_last_message = t_messages
           CHANGING  ct_all_messages = t_messages[] ).
or with the more classic syntax like that:
CALL METHOD main->foo
  EXPORTING iv_bar          = 1
  IMPORTING es_last_message = t_messages
  CHANGING  ct_all_messages = t_messages[].
Function modules. Problem: FMs belong to function groups and can be called from other programs. Apparently they are meant to be reused across the system.
Yes, function modules are supposed to be global while FORMs are supposed to be local (supposed to: you can actually call a FORM in another program with PERFORM formname IN PROGRAM programname).
But classes can be local or global, depending on how you created them. A global class can be used by any program in the system. So function groups can be substituted by global classes in most cases.
The one use-case where function modules cannot be substituted by methods of classes is RFC-enabled function modules. RFC is the Remote Function Call protocol, which allows external systems to execute a function module in another system via the network. However, if you do need some other system to communicate with your SAP system, you might want to consider using web services instead, which can be implemented in pure ABAP-OO. They also offer much better interoperability with non-SAP systems because they don't require a proprietary protocol.
Is there a newer structure that replaces the obsolete FORM structure, that [...] Doesn't require ABAP Objects syntax ?
Here is where you have a problem. ABAP Objects syntax is the way we are supposed to program ABAP now. There is currently a pretty hard push to forget all the non-OO ways to write ABAP and fully embrace the ABAP-OO style of writing code. With every new release, more classic syntax that can be substituted by ABAP-OO syntax gets declared obsolete.
However, so far SAP follows the philosophy of 100% backward compatibility. While they might try their best to compel people not to use certain obsolete language constructs (including adding scary-sounding warnings to the syntax check), they very rarely actually remove any language features. They hardly can, because they themselves have tons of legacy code which uses them and which would be far too expensive and risky to rewrite. The only case I can think of where they actually removed language features was when they introduced Unicode, which made certain direct assignments between now-incompatible types syntactically illegal.
You have some wrong information there. I don't know what system version you are on, but this info could help you out:
Methods: They actually accept parameters (it would be crazy if they didn't). In fact, they accept IMPORTING, EXPORTING, CHANGING and RETURNING parameters.
Forms: Indeed they are obsolete, but in my opinion there is no risk in using them; almost every standard component relies on programs made of FORMs. FORMs are a core concept in ABAP programming. They are the "function" or "def" of many other languages. They accept USING, CHANGING and TABLES parameters.
I'm trying to solve the problem of serializing and deserializing Box<SomeTrait>. I know that in the case of a closed type hierarchy, the recommended way is to use an enum and there are no issues with their serialization, but in my case using enums is an inappropriate solution.
At first I tried to use Serde as it is the de-facto Rust serialization mechanism. Serde is capable of serializing Box<X> but not in the case when X is a trait. The Serialize trait can’t be implemented for trait objects because it has generic methods. This particular issue can be solved by using erased-serde so serialization of Box<SomeTrait> can work.
The main problem is deserialization. To deserialize polymorphic type you need to have some type marker in serialized data. This marker should be deserialized first and after that used to dynamically get the function that will return Box<SomeTrait>.
std::any::TypeId could be used as a marker type, but the main problem is how to dynamically get the deserialization function. I do not consider the option of registering a function for each polymorphic type that should be called manually during application initialization.
I know two possible ways to do it:
Languages that have runtime reflection, like C#, can use it to get the deserialization method.
In C++, the cereal library uses magic of static objects to register deserializer in a static map at the library initialization time.
But neither of these options is available in Rust. How can deserialization of polymorphic objects be added in Rust if at all?
This has been implemented by dtolnay in the typetag crate.
The concept is quite clever and is explained in the README:
How does it work?
We use the inventory crate to produce a registry of impls of your trait, which is built on the ctor crate to hook up initialization functions that insert into the registry. The first Box<dyn Trait> deserialization will perform the work of iterating the registry and building a map of tags to deserialization functions. Subsequent deserializations find the right deserialization function in that map. The erased-serde crate is also involved, to do this all in a way that does not break object safety.
To summarize, every implementation of the trait declared as [de]serializable is registered at compile-time, and this is resolved at runtime in case of [de]serialization of a trait object.
All your libraries could provide a registration routine, guarded by std::sync::Once, that registers some identifier into a common static mut, but obviously your program must call them all.
I've no idea if TypeId yields consistent values across recompiles with different dependencies.
A library to do this should be possible. To create such a library, we would create a bidirectional mapping from TypeId to type name before using the library, and then use that for serialization/deserialization with a type marker. It would be possible to have a function for registering types that are not owned by your package, and to provide a macro annotation that automatically does this for types declared in your package.
If there's a way to access a type ID in a macro, that would be a good way to instrument the mapping between TypeId and type name at compile time rather than runtime.
I know it is possible to use the Win32_ProgIDSpecification class to iterate over the ProgIDs available on the system.
Is there anything in WMI that allows iterating over a given ProgID and returning type information -- methods and their parameters and return types, properties and their return types, and events?
(I am looking to create a tool to generate TypeScript definitions for Automation objects, given a specific ProgID. At this point, I am using .NET reflection on types referenced in a C# project.)
No, WMI doesn't provide such information; instead, you must use the ITypeInfo interface to retrieve the type information.
I have a COM component that has get_XXX and put_XXX methods in it. I used it in a .NET project and an RCW was generated for it. I now see get_XXX and set_XXX methods and NOT the put_XXX ones. Is that automatic or defined somewhere in the IDL?
These are property accessor methods. A compiler that uses the COM server is expected to generate a call to get_Xxx() when the client program reads the property, put_Xxx() when it writes it. A special one that C# doesn't have at all is putref_Xxx(), used to unambiguously access an object instead of a value.
The normal translation performed by Tlbimp.exe is as a plain C# property. But that doesn't always work, C# is a lot more strict about what a property can look like:
The default property, the one that's annotated as DISPID_VALUE (dispid 0) must take a single argument to be compatible. This maps to the C# indexer property, the one that makes it look like you are indexing an array.
Any other property cannot take an argument; C# does not support indexed properties other than the indexer.
C# does not have the equivalent of putref_Xxx(); the syntax ambiguity cannot occur in a C# program because of the previous two bullets. The core reason the C# team decided to put these restrictions in place is that they greatly disliked ambiguity in the language.
So Tlbimp.exe is forced to deal with these restrictions: if the COM property accessors are not compatible, it must fall back to exposing them as plain methods instead of a property. With default names, they'll get the get_ and set_ prefixes. The latter explains your question; why they picked set_ rather than put_ is otherwise unclear.
Notable is that C# version 4 relaxed several of these restrictions, mostly to make interop with Office programs easier, which was quite painful in earlier C# versions, to put it mildly. It extended the property syntax to lessen the pain, but only for COM interop. If you are still stuck on an old version of .NET, now is a very good time to consider updating.
The properties themselves have no prefixes (put_ etc.); they have names, a getter method, and a setter method, but no prefixes. The method table generated from the type library receives prefixes to distinguish between getters and setters, hence the prefixes. The exact prefix string depends on the preference of whoever generates the names.
See also:
#pragma import attributes - raw_property_prefixes
By default, low-level propget, propput, and propputref methods are exposed by member functions named with prefixes of get_, put_, and putref_ respectively. These prefixes are compatible with the names used in the header files generated by MIDL.
I was just curious, why should we use reflection in the first place?
// Without reflection
Foo foo = new Foo();
foo.hello();
// With reflection
Class<?> cls = Class.forName("Foo");
Object foo = cls.newInstance();
Method method = cls.getMethod("hello");
method.invoke(foo);
We can simply create an object and call the class's method, but why do the same using the forName, newInstance and getMethod functions?
To make everything dynamic?
Simply put: because sometimes you don't know either the "Foo" or "hello" parts at compile time.
The vast majority of the time you do know this, so it's not worth using reflection. Just occasionally, however, you don't - and at that point, reflection is all you can turn to.
As an example, protocol buffers allows you to generate code which either contains full statically-typed code for reading and writing messages, or it generates just enough so that the rest can be done by reflection: in the reflection case, the load/save code has to get and set properties via reflection - it knows the names of the properties involved due to the message descriptor. This is much (much) slower but results in considerably less code being generated.
Another example would be dependency injection, where the names of the types used for the dependencies are often provided in configuration files: the DI framework then has to use reflection to construct all the components involved, finding constructors and/or properties along the way.
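As a minimal sketch of that dependency-injection case (the config file name, the "service.impl" key and the factory class are invented for illustration), the framework side boils down to reading a class name as text and constructing the object reflectively:

import java.io.FileInputStream;
import java.util.Properties;

public class TinyFactory {
    // Reads a fully-qualified class name from config.properties
    // (the key "service.impl" is hypothetical) and instantiates it.
    public static Object createService() throws Exception {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("config.properties")) {
            props.load(in);
        }
        String className = props.getProperty("service.impl");
        Class<?> cls = Class.forName(className);
        // Uses the public no-arg constructor; a real DI framework would also
        // inspect constructor parameters and resolve those dependencies recursively.
        return cls.getDeclaredConstructor().newInstance();
    }
}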
It is used whenever you (= your method / your class) don't know at compile time the type you should instantiate or the method you should invoke.
Also, many frameworks use reflection to analyze and use your objects. For example:
Hibernate/NHibernate (and any object-relational mapper) use reflection to inspect all the properties of your classes so that they are able to update them or use them when executing database operations
you may want to make it configurable which method of a user-defined class is executed by default by your application. The configured value is a String, and you can get the target class, get the method that has the configured name, and invoke it, without knowing it at compile time.
parsing annotations is done by reflection
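For the annotation point, here is a small sketch of how a framework might find and call annotated methods via reflection (the @Startup annotation and the class name are invented for illustration):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

@Retention(RetentionPolicy.RUNTIME)
@interface Startup {}

public class AnnotationRunner {
    // Invokes every no-arg method of the target that carries @Startup.
    public static void runStartupHooks(Object target) throws Exception {
        for (Method m : target.getClass().getDeclaredMethods()) {
            if (m.isAnnotationPresent(Startup.class) && m.getParameterCount() == 0) {
                m.setAccessible(true);
                m.invoke(target);
            }
        }
    }
}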
A typical usage is a plug-in mechanism, which supports classes (usually implementations of interfaces) that are unknown at compile time.
You can use reflection for automating any process that could usefully use a list of the object's methods and/or properties. If you've ever spent time writing code that does roughly the same thing on each of an object's fields in turn -- the obvious way of saving and loading data often works like that -- then that's something reflection could do for you automatically.
The most common applications are probably these three:
Serialization (see, e.g., .NET's XmlSerializer)
Generation of widgets for editing objects' properties (e.g., Xcode's Interface Builder, .NET's dialog designer)
Factories that create objects with arbitrary dependencies by examining the classes for constructors and supplying suitable objects on creation (e.g., any dependency injection framework)
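To illustrate the serialization item above (a toy name=value format, not how XmlSerializer actually works), one generic routine can walk the fields of any object:

import java.lang.reflect.Field;

public class FieldDumper {
    // Generic "save" routine: emits every declared field as name=value,
    // without knowing the concrete class at compile time.
    public static String dump(Object obj) throws IllegalAccessException {
        StringBuilder sb = new StringBuilder();
        for (Field f : obj.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            sb.append(f.getName()).append('=').append(f.get(obj)).append('\n');
        }
        return sb.toString();
    }
}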
Using reflection, you can very easily write configurations that detail methods/fields in text, and the framework using these can read a text description of the field and find the real corresponding field.
e.g. JXPath allows you to navigate objects like this:
//company[@name='Sun']/address
so JXPath will look for a method getCompany() (corresponding to company), a field in that object called name, and so on.
You'll find this in lots of frameworks in Java e.g. JavaBeans, Spring etc.
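A simplified sketch of the kind of lookup such frameworks perform (this is not JXPath's actual implementation; it just shows the JavaBeans-style getter resolution it relies on, using a dot-separated path instead of XPath):

import java.lang.reflect.Method;

public class BeanPath {
    // Resolves a dot-separated path like "company.address" by deriving
    // getter names (company -> getCompany()) and invoking them in turn.
    public static Object resolve(Object root, String path) throws Exception {
        Object current = root;
        for (String part : path.split("\\.")) {
            String getter = "get" + Character.toUpperCase(part.charAt(0)) + part.substring(1);
            Method m = current.getClass().getMethod(getter);
            current = m.invoke(current);
        }
        return current;
    }
}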
It's useful for things like serialization and object-relational mapping. You can write a generic function to serialize an object by using reflection to get all of an object's properties. In C++, you'd have to write a separate function for every class.
I have used it in some validation classes before, where I passed a large, complex data structure in the constructor and then ran a zillion (a couple hundred, really) methods to check the validity of the data. All of my validation methods were private and returned booleans, so I made one "validate" method you could call which used reflection to invoke all the private methods in the class that returned booleans.
This made the validate method more concise (it didn't need to enumerate each little method) and guaranteed all the methods were being run (e.g. someone writes a new validation rule and forgets to call it in the main method).
After changing to use reflection I didn't notice any meaningful loss in performance, and the code was easier to maintain.
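A condensed sketch of that pattern (the class and method names are invented; the real code had a couple hundred checks):

import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class Validator {
    private final Object data;  // the large structure passed to the constructor

    public Validator(Object data) {
        this.data = data;
    }

    private boolean checkSomething() { return data != null; }  // one of many rules

    // Invokes every private no-arg method that returns boolean and reports
    // whether all of them passed - new rules are picked up automatically.
    public boolean validate() throws Exception {
        boolean allValid = true;
        for (Method m : getClass().getDeclaredMethods()) {
            if (Modifier.isPrivate(m.getModifiers())
                    && m.getReturnType() == boolean.class
                    && m.getParameterCount() == 0) {
                m.setAccessible(true);
                allValid &= (Boolean) m.invoke(this);
            }
        }
        return allValid;
    }
}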
In addition to Jon's answer, another usage is to be able to "dip your toe in the water" to test whether a given facility is present in the JVM.
Under OS X, a Java application looks nicer if some Apple-provided classes are called. The easiest way to test whether these classes are present is to check for them with reflection first.
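For example (treat the exact class name as an assumption, since it depends on the JDK in use; older Apple JDKs shipped com.apple.eawt.Application):

public class MacSupport {
    // Returns true if the Apple-specific classes are available in this JVM.
    public static boolean hasAppleExtensions() {
        try {
            Class.forName("com.apple.eawt.Application");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }
}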
Sometimes you need to create an object of a class on the fly, or from somewhere other than Java code (e.g. a JSP). In such cases reflection is useful.