DLL z/OS dynamic calls

I'm compiling a COBOL program as a DLL in z/OS using the compiler options
PGMN(LM),DLL,EXPORTALL
When I do this, it also forces the compile to be NODYNAM. In this context, is there some other parm I can use to force the CALLs to other subprograms from this program to be dynamic (i.e. resolved at run time)?
I know I can use the CALL variable-name approach to accomplish this, but I can't do this with system routines like DSNELI, the DB2 call interface.
Does the IMPORT option have something to do with this?
Thanks!

All DLLs must be compiled with NODYNAM; this cannot be avoided. As you pointed out, using NODYNAM does not preclude dynamic program calls using the CALL var-name approach. As long as you are using dynamic calls to locally developed routines, you will keep all of the advantages of not having statically linked modules in your programs.
Be less concerned about statically linked system modules such as CALL 'DSNELI'. These are stub programs that will dynamically load the appropriate language interface module at run time. See Universal Language Interface.

Generally speaking, you want the calls to those system routines to be static. The routines tend to be stubs that locate the "real" routine at runtime.

Do functions in an API make system calls themselves, or are system calls made by the API aided by the system-call interface in the runtime support system?

I was going through the Dinosaur book by Galvin when I ran into the difficulty asked in the question.
Typically application developers design programs according to an application programming interface (API). The API specifies a set of functions that are available to an application programmer, including the parameters that are passed to each function and the return values the programmer can expect.
The text adds that:
Behind the scenes the functions that make up an API typically invoke the actual system calls on behalf of the application programmer. For example, the Win32 function CreateProcess() (which unsurprisingly is used to create a new process) actually calls the NTCreateProcess() system call in the Windows kernel.
From the above two points I came to know that programmers using the API make calls to the API function corresponding to the system call they want to make. The corresponding function in the API then actually makes the system call.
Next what the text says confuses me a bit:
The run-time support system (a set of functions built into libraries included with a compiler) for most programming languages provides a system-call interface that serves as the link to system calls made available by the operating system. The system-call interface intercepts function calls in the API and invokes the necessary system calls within the operating system. Typically, a number is associated with each system call, and the system-call interface maintains a table indexed according to these numbers. The system call interface then invokes the intended system call in the operating-system kernel and returns the status of the system call and any return values.
The above excerpt makes me feel that the functions in the API do not make the system calls directly. There are probably functions built into the system-call interface of the runtime support system which are waiting for a system call to be requested by a function in the API.
(The text includes a diagram explaining the working of the system-call interface.)
The text later explains the working of a system call in the C standard library (the excerpt is a figure, omitted here), which is quite clear.
I don't totally understand the terminology of the excerpts you shared. Some of the terminology is also wrong, like in the blue image at the bottom. It says the standard C library provides system-call interfaces, while it doesn't. The standard C library is just a standard, a convention: it says that if you write certain code, then the effect of that code when it is run should be according to the convention. It also says that the C library intercepts printf() calls, while it doesn't. This terminology is confusing at best.
The C library doesn't intercept calls. As an example, on Linux, the open-source implementation of the C standard library is glibc. You can browse its source code here: https://elixir.bootlin.com/glibc/latest/source. When you write C/C++ code, you use standard functions which are specified in the C/C++ standard.
When you write code, it is compiled to assembly and then to machine code. Assembly is just a higher-level representation of machine code: it is closer to the actual code, and easier to translate to machine code than C/C++ is. The easiest case to understand is when you compile code statically: all code is included in your executable. For example, if you write
#include <stdio.h>
int main() {
printf("Hello, World!");
return 0;
}
the printf() function is declared in stdio.h, a header provided by the C library implementation (glibc on Linux) and written for one OS or a set of UNIX-like OSes. The header provides prototypes which are defined in other .c files provided by glibc. Those .c files provide the actual implementation of printf(). The printf() function will eventually make a system call, which relies on the presence of an OS like Linux to run. When you compile statically, all the code is included in your executable, right down to the system call. You can see my answer here: Who sets the RIP register when you call the clone syscall?. It specifically explains how system calls are made.
In the end you'll have something like assembly code pushing some arguments into some conventional registers, then the actual syscall instruction, which jumps to the handler address stored in an MSR. I don't totally understand the full mechanism behind printf(), but it will end up in the Linux kernel's implementation of the write system call, which will write to the console and return.
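To make the layering concrete, here is a minimal sketch of my own (not from the book), written in C# since the other threads in this digest use it. It skips the language's own standard library and calls glibc's write() wrapper directly, the same thin wrapper printf() ultimately funnels into. It assumes Linux, glibc, and a modern .NET runtime.

using System;
using System.Runtime.InteropServices;
using System.Text;

class WriteDemo
{
    // glibc's write() is itself only a thin wrapper that places the
    // arguments in the conventional registers and executes the syscall
    // instruction (write is syscall number 1 on x86-64 Linux).
    // On some systems the library name "libc.so.6" may be needed instead.
    [DllImport("libc", SetLastError = true)]
    private static extern nint write(int fd, byte[] buf, nuint count);

    static void Main()
    {
        byte[] msg = Encoding.ASCII.GetBytes("Hello, World!\n");
        write(1, msg, (nuint)msg.Length);   // fd 1 is stdout
    }
}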
I think what confuses you is that the "runtime support system" is probably referring to higher-level languages which are not compiled to machine code directly, like Python or Java. Java, for example, compiles to bytecode, and a virtual machine translates that bytecode to machine code at runtime. It can be confusing not to make this distinction when talking about different languages. Maybe your book is lacking examples.

Wrapper to unmanaged code

How would you build a wrapper to unmanaged code in order to use it in managed code, and when exactly do you have to do that?
You don't often need a wrapper; many DLLs with straightforward exported C functions can be pinvoked with the [DllImport] attribute. An exception for C exports would be a poorly designed DLL that requires the client code to release memory; that can't be done by the managed code since it doesn't have access to the allocator.
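A minimal sketch of that straightforward case, using the real Win32 export MessageBoxW (the surrounding class and names are my own):

using System;
using System.Runtime.InteropServices;

class PInvokeDemo
{
    // A flat C-style export can be declared and called directly;
    // no wrapper assembly is required.
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern int MessageBoxW(IntPtr hWnd, string text, string caption, uint type);

    static void Main()
    {
        MessageBoxW(IntPtr.Zero, "Hello from managed code", "P/Invoke", 0);
    }
}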
The case where you have to have a wrapper is a native C++ class. Managed code cannot pinvoke it directly, since it knows neither how to create an instance of the class (which requires knowing the size of the object and calling the constructor) nor how to destroy it (which requires calling the destructor). Writing the wrapper is pretty easy to do in C++/CLI. It is very mechanical; the SWIG project can do it automatically. Learning that tool is, however, more of an investment than learning how to write the wrapper.

When is it a good idea to use a vb.net Module

Some of my co-workers make extensive use of the VB.net concept of Modules. Unfortunately, I just don't 'get it'. I see no benefit in using modules over shared classes. Am I missing something? When would it be preferable to use a module? Or am I (as I do quite often in this language) 'just not getting it'?
In VB.net a module is a shared class. When modules are compiled they are given a private constructor and their methods are marked Shared.
There are some times when you are forced to use modules by the compiler (in the same way static classes are required in C#), such as for extension methods, which cannot be created inside a VB.Net class.
By using modules for your helper methods you will make it easier to convert them over to extension methods later and restrict others from adding any instance methods or constructors.
That said, they are a hangover from VB6, which did not support full OO programming, and beyond standalone helper methods they are not widely used.
A module is essentially the same as a shared class. The major difference is that in a module there's no need for all the extra Shared keywords, because everything is implicitly Shared. If you have no instance data and are just using the class as a kind of namespace for functions, then it's a better idea (IMO) to use a module instead and make that clear.
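To make the comparison concrete, here is a rough sketch of my own, written as the C# static class the answers liken a module to (the names are hypothetical):

// Roughly what a VB.Net Module amounts to: no instances, every member
// shared. In VB the Shared keyword would be implicit; here it is spelled
// out as "static".
public static class StringHelpers
{
    // Modules are also where VB requires extension methods to live;
    // the C# equivalent is a static method in a top-level static class.
    public static string Reversed(this string s)
    {
        char[] chars = s.ToCharArray();
        System.Array.Reverse(chars);
        return new string(chars);
    }
}

Calling "abc".Reversed() then works just as a VB extension method defined in a Module would.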

Converting c++ project to clr safe project

I need to work on converting a very large C++ project to /clr:safe. The current C++ project has a lot of C++ features: templates, generics, pointers, storage/stream and OLE APIs, zlib compression APIs, inlines, etc. Where can I find a detailed document for this type of conversion? Can you suggest a good book to refer to? If any of you have done such a conversion, can I get some analysis from you?
I'll just cough up the MSDN Library article titled "How to: Migrate to /clr:safe":
Visual C++ can generate verifiable components using /clr:safe, which causes the compiler to generate errors for each non-verifiable code construct.
The following issues generate verifiability errors:
Native types. Even if it isn't used, the declaration of native classes, structures, pointers, or arrays will prevent compilation.
Global variables
Function calls into any unmanaged library, including common language runtime function calls
A verifiable function cannot contain a static_cast operator for down-casting. The static_cast operator can be used for casting between primitive types, but for down-casting, safe_cast or a C-style cast (which is implemented as a safe_cast) must be used.
A verifiable function cannot contain a reinterpret_cast operator (or any C-style cast equivalent).
A verifiable function cannot perform arithmetic on an interior_ptr. It may only assign to it and dereference it.
A verifiable function can only throw or catch pointers to reference types, so value types must be boxed before throwing.
A verifiable function can only call verifiable functions (such that calls to the common language runtime are not allowed, including AtEntry/AtExit, and so global constructors are disallowed).
A verifiable class cannot use Explicit.
If building an EXE, a main function cannot declare any parameters, so GetCommandLineArgs must be used to retrieve command-line arguments.
Making a non-virtual call to a virtual function.
Also, the following keywords cannot be used in verifiable code:
unmanaged and pack pragmas
naked and align __declspec modifiers
__asm
__based
__try and __except
I reckon that will keep you busy for a while. There is no magic wand to wave to turn native C++ into verifiable code. Are you sure this is worth the investment?
The vast majority of native C++ is entirely valid C++/CLI, including templates, inlines, and so on, though the CLR STL is rather slow compared to the BCL. Also, native C++ doesn't have generics, only templates.
The reality of compiling as C++/CLI is to check the switch and push compile, and wait for it to throw errors.
Rewriting native C++ into safe C++/CLI will result in code that is syntactically different but semantically the same as C#. If that is the case, why not rewrite directly in C#?
If you want to avoid what is essentially a complete rewrite, consider the following alternatives:
P/Invoke. Unfortunately, I'm not sure whether this would isolate safe from unsafe code. Even if it can perform the isolation, you'll need to wrap your existing C++ code in a procedural, C-like API so it can be consumed by P/Invoke (see the sketch after this list). On the plus side, unless your API is excessively chatty, you get to keep (most of) your native performance.
Wrapping your C++ into an out-of-process COM server and using COM interop to consume it from the managed code. This way, your managed code is completely protected from any corruption that might happen at the C++ end and can remain "safe". The downside is the performance hit you'll take for out-of-process marshaling, and the implementation effort you'll need to expend to implement COM correctly.
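For the P/Invoke route, here is a hedged sketch of the managed side only; the Widget_* exports stand for a hypothetical C-style API flattened out of the existing C++ classes (none of these names are real):

using System;
using System.Runtime.InteropServices;

static class NativeWidget
{
    // Hypothetical flattened exports: each C++ member function becomes a
    // free function taking an opaque handle to the native object.
    [DllImport("widget.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr Widget_Create();

    [DllImport("widget.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern int Widget_Process(IntPtr widget, int value);

    [DllImport("widget.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern void Widget_Destroy(IntPtr widget);
}

class Program
{
    static void Main()
    {
        IntPtr w = NativeWidget.Widget_Create();
        try
        {
            Console.WriteLine(NativeWidget.Widget_Process(w, 42));
        }
        finally
        {
            NativeWidget.Widget_Destroy(w);   // the native side owns the memory
        }
    }
}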

DynamicMethod in Cecil

Is there anything similar to Reflection.Emit.DynamicMethod in Cecil? Thanks.
Edit: what about the following constructs?
EmitCall, e.g.:
IL.EmitCall(OpCodes.Callvirt, GetBuildKey, null);
IL.Emit(OpCodes.Unbox_Any, dependencyType);
LocalBuilder, e.g. LocalBuilder resolving = ilContext.IL.DeclareLocal(typeof(bool));
System.Reflection.Emit.Label, e.g. Label existingObjectNotNull = buildContext.IL.DefineLabel(); // Do I have to use TextMap?
ILGenerator.BeginCatchBlock, e.g. ilContext.IL.BeginCatchBlock(typeof(Exception));
ILGenerator.MarkLabel, e.g. ilContext.IL.MarkLabel(parameterResolveFailed);
ILGenerator.EndExceptionBlock(), e.g. ilContext.IL.EndExceptionBlock();
There's no way to create a DynamicMethod with Cecil, nor does it have an equivalent.
A DynamicMethod is strongly tied to the runtime, while Cecil is completely decoupled from it. The two have completely separate type systems. DynamicMethods are meant to be, well, dynamic, and as such have to use the System.Reflection type system, as it's the one available at runtime. Mono.Cecil has its own representation of the type system, suitable for static analysis without having to actually load the assembly at runtime. So if you want to use a DynamicMethod, you have to use it along with its environment.
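For reference, this is the runtime-tied construct in question: a minimal SRE sketch of my own (the names are hypothetical) that emits a method into the running process and gets a collectable delegate back:

using System;
using System.Reflection.Emit;

class DynamicMethodDemo
{
    static void Main()
    {
        // Lives entirely inside the running CLR and is garbage-collectable.
        var dm = new DynamicMethod("Answer", typeof(int), Type.EmptyTypes);
        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldc_I4, 42);   // push 42
        il.Emit(OpCodes.Ret);          // return it

        var answer = (Func<int>)dm.CreateDelegate(typeof(Func<int>));
        Console.WriteLine(answer());   // prints 42
    }
}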
This question was originally asked, iirc, in the context of runtimes without DynamicMethod or SRE altogether, like the Compact Framework, where Cecil can be used to emit code at runtime.
Of course that's possible, but then you have to pay the price of loading the assembly, which is no small price on CF devices. So even if you could emulate a DynamicMethod by using Cecil to create an assembly containing a single static method (sketched below), it sounds like a terrible idea: the assemblies would not be collectable (DynamicMethods are), making it a giant memory leak.
If you need to emit code at runtime on the Compact Framework, emit as little as possible, and emit as few assemblies as possible.
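For completeness, here is a hedged sketch of the emulation warned about above, assuming the Mono.Cecil 0.10+ API (the type and method names are my own). The final LoadFrom is exactly the permanent, non-collectable step being cautioned against:

using System;
using Mono.Cecil;
using Mono.Cecil.Cil;

class CecilEmitDemo
{
    static void Main()
    {
        // Build an assembly containing one static method: int Answer() => 42.
        var assembly = AssemblyDefinition.CreateAssembly(
            new AssemblyNameDefinition("Emitted", new Version(1, 0)),
            "Emitted", ModuleKind.Dll);
        ModuleDefinition module = assembly.MainModule;

        // An abstract sealed class is the IL shape of a static holder type.
        var holder = new TypeDefinition("Emitted", "Holder",
            TypeAttributes.Public | TypeAttributes.Abstract | TypeAttributes.Sealed,
            module.ImportReference(typeof(object)));
        module.Types.Add(holder);

        var answer = new MethodDefinition("Answer",
            MethodAttributes.Public | MethodAttributes.Static,
            module.ImportReference(typeof(int)));
        holder.Methods.Add(answer);

        ILProcessor il = answer.Body.GetILProcessor();
        il.Emit(OpCodes.Ldc_I4, 42);
        il.Emit(OpCodes.Ret);

        assembly.Write("Emitted.dll");

        // The costly, permanent step: once loaded, this assembly can never
        // be unloaded, unlike a real DynamicMethod.
        var loaded = System.Reflection.Assembly.LoadFrom("Emitted.dll");
        object result = loaded.GetType("Emitted.Holder")
                              .GetMethod("Answer")
                              .Invoke(null, null);
        Console.WriteLine(result);   // prints 42
    }
}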