Can we export a function made available through a static library from a DLL?

I have a static library, say "A.lib", which contains a function int foo(). I have another DLL, say "B.dll", which consumes A.lib, uses the function foo(), and also exports some other functions. Is it possible to export the function int foo() (imported from A.lib) from B.dll so that it can be consumed in a third DLL, say "C.dll"?
I want to know whether it is possible or not; I don't want workarounds like making A.lib available to C.dll. Also, I am not concerned with whether this is bad design.
Thanks very much for your patience in reading this through.

I had the same requirement - just found a different solution:
Assuming that A.lib comes with an A.h that is consumed by the source files used to build B.dll (i.e. A.h contains prototypes for the functions contained in A.lib), just add the following to A.h:
#pragma comment(linker, "/export:_foo")
This will instruct the linker to export foo() when building B.dll. Note the leading underscore: it is there because that's the true name of the symbol for foo() contained in A.lib (use dumpbin /symbols A.lib | findstr foo to see it). In my example foo() used the __cdecl calling convention, but if you use __stdcall or compile as C++, you'll get different name decoration, so you'll have to adjust the #pragma statement above accordingly.
It doesn't matter if A.h gets included by many source files in B.dll - the linker doesn't complain if the exact same definition is made multiple times.
One "advantage" to this approach is that you don't even have to use the __declspec(dllexport) specifier on foo() in A.lib ...

Yes, it's possible, but any code example is language-dependent.
(For example, in C you may simply export a function with the same name, and C.dll will see it.)
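As a minimal sketch of that idea (the wrapper name foo_from_B is hypothetical; re-exporting under the exact name foo is what the #pragma approach above achieves):
/* In a B.dll source file: re-export A.lib's foo() via a thin exported wrapper. */
extern "C" int foo(void);   /* resolved from A.lib at link time */

extern "C" __declspec(dllexport) int foo_from_B(void)
{
    return foo();  /* C.dll imports foo_from_B and ends up in A.lib's code */
}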

Implementing a module system in a programming language

I'm writing my own compiler and I'm struggling to implement a module system.
Can someone guide me on how this should be done? How do other languages tackle it?
Also, I'm trying to avoid what C and C++ do (header files).
I do like the module system in Go/Golang, though.
I don't know if this is relevant, but I'm using LLVM (maybe there's a magic way to import symbols).
My initial approach:
read and parse the entry point source file, i.e. main.mylang
go through the imports of main.mylang
for each import: read, parse, and resolve its imports
...
this leads to a tree structure:
main.mylang: import1.mylang, import2.mylang, import3.mylang
import1.mylang: import4.mylang, import5.mylang
import2.mylang: import6.mylang
... etc.
Then I would traverse each node and copy its symbols (functions, global variables, etc.) to the parent node's symbol table. If a parent node is null, it's an entry-point file and the compiler can start outputting object files.
Why do I think that this is bad?
it's very slow, even when compiling 3-5 source files
it's easy to cause name collisions
you have to import the entire symbol table, because the imported file's exported symbols depend on the internal ones.
for example: imagine an exported function that modifies an internal global variable
Thanks in advance
I think your approach is really good. Compile-time speed is not that important; usability is. To prevent name collisions you can use some kind of module namespace (importname.foo() instead of just foo()) and, whenever foo does not exist, allow both forms. Alternatively, you could insert a placeholder in the parent's symbol table and, whenever the user uses that name, throw a compile-time error (something like "ambiguous symbol").
that would look like this:
main.mylang
import module1
import module2
int main() {}
module1.mylang
import module2
void foo() {}
void bar() {}
module2.mylang
import module1
void bar() {}
void fun() {}
After finding loops, the tree would look like this:
main
├──module1
│  └──module2
└──module2
  └──module1
And a graph like this:
main
├─>main()
├─>foo() (module1)
├─>bar() (defined twice, throw error when used)
├─>fun() (module2)
├─>module1<───────────┐
│  ├─>foo() (module1) │
│  └─>bar() (module1) │
└─>module2<───────────┘
  ├─>bar() (module2)
  └─>fun() (module2)
I don't know much about LLVM, but I am pretty sure normal tables are not enough to achieve this. You will at least need nested tables, if not a graph-like structure like the one I described. Also, this is not possible with the classical C/C++ architecture, unless you use unique identifiers as symbols and don't let the user know (like C++ function overloading does). For example, you could call one function __module1_bar and the other __module2_bar, and whenever the user writes bar() you look up in this graph which one they want to call. In the main function, using module1.bar() will lead you to __module1_bar (follow the graph), and module2.bar() or module1.module2.bar() will lead you to __module2_bar.
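A minimal C++ sketch of that lookup (the Module and resolve names are hypothetical illustrations, not from the question; the idea is that each module keeps its own table mapping source names to mangled, globally unique names, and qualified lookups walk the import graph):
#include <optional>
#include <string>
#include <unordered_map>
#include <vector>

// One symbol table per module, linked to the modules it imports.
struct Module {
    std::string name;
    std::unordered_map<std::string, std::string> symbols; // "bar" -> "__module2_bar"
    std::vector<const Module*> imports;
};

// Resolve "bar" or a qualified path like "module1.bar" starting from `from`;
// returns the mangled, globally unique name on success.
std::optional<std::string> resolve(const Module& from, const std::string& path) {
    auto dot = path.find('.');
    if (dot == std::string::npos) {
        // Unqualified name: local symbols win. A real compiler would also
        // check imports here and raise "ambiguous symbol" on a collision.
        auto it = from.symbols.find(path);
        if (it != from.symbols.end()) return it->second;
        return std::nullopt;
    }
    // Qualified name: follow the named import, then resolve the rest there.
    const std::string head = path.substr(0, dot);
    for (const Module* m : from.imports)
        if (m->name == head)
            return resolve(*m, path.substr(dot + 1));
    return std::nullopt;
}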
Good luck figuring that out. But it is certainly an interesting problem.

Separating operator definitions for a class to other files and using them

I have 4 files all in the same directory: main.rakumod, infix_ops.rakumod, prefix_ops.rakumod and script.raku:
the main module has a class definition (class A)
the *_ops modules have some operator routine definitions, so that I can write, e.g., $a1 + $a2 in an overloaded way
script.raku tries to instantiate A object(s) and use those user-defined operators
Why 3 files and not 1? The class definition might be long, and separating the overloaded operator definitions into their own files seemed like a good idea for writing tidier code (easier to manage).
e.g.,
# main.rakumod
class A {
has $.x is rw;
}
# prefix_ops.rakumod
use lib ".";
use main;
multi prefix:<++>(A:D $obj) {
++$obj.x;
$obj;
}
and similar routines in infix_ops.rakumod. Now, in script.raku, my aim is to import the main module only and have the overloaded operators also available:
# script.raku
use lib ".";
use main;
my $a = A.new(x => -1);
++$a;
but it naturally doesn't see the ++ multi for A objects, because main.rakumod doesn't know about the *_ops.rakumod files as it stands. Is there a way I can achieve this? If I use prefix_ops in main.rakumod, it says 'use lib' may not be pre-compiled, perhaps because of circular dependentness.
it says 'use lib' may not be pre-compiled
The word "may" is ambiguous. Actually it cannot be precompiled.
The message would be better if it said something to the effect of "Don't put use lib in a module."
This has now been fixed per #codesections++'s comment below.
perhaps because of circular dependentness
No. use lib can only be used by the main program file, the one directly run by Rakudo.
Is there a way I can achieve this?
Here's one way.
We introduce a new file that's used by the other packages to eliminate the circularity. So now we have four files (I've rationalized the naming to stick to A or variants of it for the packages that contribute to the type A):
A-sawn.rakumod that's a role or class or similar:
unit role A-sawn;
Other packages that are to be separated out into their own files use the new "sawn" package and does or is it as appropriate:
use A-sawn;
unit class A-Ops does A-sawn;
multi prefix:<++>(A-sawn:D $obj) is export { ++($obj.x) }
multi postfix:<++>(A-sawn:D $obj) is export { ($obj.x)++ }
The A.rakumod file for the A type does the same thing. It also uses whatever other packages are to be pulled into the same A namespace; this will import symbols from them according to Raku's standard importing rules. The relevant symbols are then explicitly exported:
use A-sawn;
use A-Ops;
sub EXPORT { Map.new: OUTER:: .grep: /'fix:<'/ }
unit class A does A-sawn;
has $.x is rw;
Finally, with this setup in place, the main program can just use A;:
use lib '.';
use A;
my $a = A.new(x => -1);
say $a++; # A.new(x => -1)
say ++$a; # A.new(x => 1)
say ++$a; # A.new(x => 2)
The two main things here are:
Introducing an (empty) A-sawn package
This type eliminates circularity using the technique shown in #codesection's answer to Best Way to Resolve Circular Module Loading.
Raku culture has a fun generic term/meme for techniques that cut through circular problems: "circular saws". So I've used a -sawn suffix on the typename as a convention when using this technique.[1]
Importing symbols into a package and then re-exporting them
This is done via sub EXPORT { Map.new: ... }.[2] See the doc for sub EXPORT.
The Map must contain a list of symbols (Pairs). For this case I've grepped through keys from the OUTER:: pseudopackage that refers to the symbol table of the lexical scope immediately outside the sub EXPORT the OUTER:: appears in. This is of course the lexical scope into which some symbols (for operators) have just been imported by the use A-Ops; statement. I then grep that symbol table for keys containing fix:<; this will catch all symbol keys with that string in their name (so infix:<..., prefix:<..., etc.). Alter this code to suit your needs.[3]
Footnotes
[1] As things stand, this technique means coming up with a new name that's different from the one used by the consumer of the new type, one that won't conflict with any other packages. This suggests a suffix. I think -sawn is a reasonable choice for an unusual, distinctive, and mnemonic suffix. That said, I imagine someone will eventually package this process up into a new language construct that does the work behind the scenes, generating the name and automating away the manual changes one has to make to packages with the shown technique.
[2] A critically important point is that, if a sub EXPORT is to do what you want, it must be placed outside the package definition to which it applies. That in turn means it must come before a unit package declaration. And that in turn means any use statement relied on by that sub EXPORT must appear within the same or an outer lexical scope. (This is explained in the doc, but I think it bears summarizing here to try to head off much head-scratching, because there's no error message if it's in the wrong place.)
[3] As with the circularity saw aspect discussed in footnote 1, I imagine someone will also eventually package up this import-and-export mechanism into a new construct, or, perhaps even better, an enhancement of Raku's built in use statement.
Hi #hanselmann, here is how I would write this (in 3 files, same dir):
Define my class(es):
# MyClass.rakumod
unit module MyClass;
class A is export {
has $.x is rw;
}
Define my operators:
# Prefix_Ops.rakumod
unit module Prefix_Ops;
use MyClass;
multi prefix:<++>(A:D $obj) is export {
++$obj.x;
$obj;
}
Run my code:
# script.raku
use lib ".";
use MyClass;
use Prefix_Ops;
my $a = A.new(x => -1);
++$a;
say $a.x; #0
Taking my cue from the Module docs, there are a couple of things I am doing differently:
Avoiding the use of main (or Main, or MAIN): I am wary that MAIN is a reserved name and just want to keep clear of engaging any of that (cool) machinery.
Bringing in the unit module declaration at the top of each .rakumod file; it may be possible to use bare files in Raku, but I have never tried it, and it is not obvious from the docs that it is even possible or supported.
Since I wanted this to work the first time, you will note that I use the same file name and module name; again, it may be possible to do this differently (multiple modules in one file, and so on), but I have not tried that either.
Using the 'is export' trait where I want my script to be able to use these definitions; as you will know from close study of the docs ;-), each module has its own namespace (the "stash"), and we need export to shove the exported definitions into the namespace of the script.
As #raiph mentions, you only need the script to define the module library location.
Since you want your prefix multi to "know" about class A, you also need to use MyClass in the Prefix_Ops module.
Anyway, all in all, I think the Raku module system exemplifies the unique combination of "easy things easy and hard things doable"... all I had to do with your code (which was very close) was tweak a few filenames and sprinkle in some concise concepts like 'unit module' and 'is export', and it really does not look much different, since Raku keeps all the import/export machinery under the surface, like the swan gliding over the river...

Code sharing between multiple independently compiled binaries/hex files

I'm looking for documentation/information on how to share information/code between multiple binaries compiled for Cortex-M0/M4/M7 architectures. The two binaries will be on the same chip and the same architecture. They are flashed at different locations; one binary sets the main stack pointer and resets the program counter so that it "jumps" to the other binary. I want to share code between these two binaries.
I've done a simple copy of an array of function pointers into a section defined in the linker script and placed in RAM. The other binary then reads that RAM, casts it to an array, and uses an index to call functions in the first binary. This does work as a proof of concept, but I think what I'm looking for is a bit more complex, since I want some way of describing compatibility between the two binaries. I want something like the functionality of shared libraries, but I'm unsure whether I need position-independent code.
As an example, the current copy process is basically:
Source binary:
void copy_func()
{
/* publish the function pointer table in the shared RAM section */
memcpy(address_custom_ram_section, array_of_function_pointers, fixed_size);
}
Binary which is jumped to from the source binary:
array_fp_type get_funcs()
{
/* read the table back out of the shared RAM section */
memcpy(array_of_fp, address_custom_ram_section, fixed_size);
return array_of_fp;
}
Then I can use the array_of_fp to call into functions residing in the source binary from the jump binary.
So what I'm looking for is resources or input from someone who has implemented a similar system. For example, I would like not to need a custom RAM section to copy the function pointers into.
I would be fine with the compilation step of the source binary outputting something which can be included in the compilation step of the jump binary. However, it needs to be reproducible, and recompiling the source binary shouldn't break compatibility with the jump binary (even if it included a different file from what is now outputted) as long as you don't change the interface.
To clarify: the source binary shouldn't require any specific knowledge about the jump binary. The code should not reside in both binaries, as that would defeat the purpose of this mechanism. The overall goal of this mechanism is to save space when creating multi-binary applications on Cortex-M processors.
Any ideas or links to resources are welcome. If you have any more questions feel free to comment on the question and I'll try to answer it.
It's very hard for me to picture what you want to do, but if you're interested in having an application link against your bootloader/ROM, then see Loading symbol file while linking for a hint on what you could do.
Build your "source"(?) image, scrape its mapfile and make a symbol file, then use that when you link your "jump"(?) image.
This does mean you need to link your "jump" image against a specific version of your "source" image.
If you need them to be semi-version-independent (i.e. you define a set of functions that get exported, but you can rebuild either side), then you need to export function pointers at known locations in your "source" image and link against those function pointers in your "jump" image. You can simplify the bookkeeping by making a structure of function pointers and accessing the functions through that on either side.
For example:
shared_functions.h:
struct FunctionPointerTable
{
void(*function1)(int);
void(*function2)(char);
};
extern struct FunctionPointerTable sharedFunctions;
Source file in "source" image:
void function1Implementation(int a)
{
printf("You sent me an integer: %d\r\n", a);
function2Implementation((char)(a%256));
sharedFunctions.function2((char)(a%256));
}
void function2Implementation(char b)
{
printf("You sent me an char: %c\r\n", b);
}
struct FunctionPointerTable sharedFunctions =
{
function1Implementation,
function2Implementation,
};
Source file in "jump" image:
#include "shared_functions.h"
sharedFunctions.function1(1024);
sharedFunctions.function2(100);
When you compile/link the "source" image, take its mapfile, extract the location of sharedFunctions, and create a symbol file that is linked with the sources of the "jump" image.
Note: the printfs (or anything directly called by the shared functions) would come from the "source" image (and not the "jump" image).
If you need them to come from the "jump" image (or to be overridable), then you need to access them through the same function pointer table, and the "jump" image needs to fix up the function pointer table with its versions of the relevant functions. I updated function1() to show this. The direct call to function2 will always be the "source" version. The shared-function-pointer call will go through the table and reach the "source" version unless the "jump" image updates the table to point to its own implementation.
You CAN get away from the structure, but then you need to export the function pointers one by one (not a big problem); however, you want to keep them in order and at a fixed location, which means explicitly putting them in the linker descriptor file, etc. I showed the structure method to distill it down to the easiest example.
As you can see, things get pretty hairy, and there is some penalty (calling through a function pointer is slower, because you need to load the address to jump to).
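If you would rather not scrape the mapfile at all, here is a minimal sketch of the fixed-location variant (the address 0x08001234 is purely hypothetical; in practice you would pin sharedFunctions to a known address in the "source" image's linker script and hard-code the same address in the "jump" image):
#include "shared_functions.h"

/* Hypothetical fixed address at which the "source" image's linker script
   placed its sharedFunctions table. */
#define SHARED_FUNCTIONS_ADDR 0x08001234u

/* The "jump" image reaches the table through a plain pointer cast,
   so no symbol file or special link step is needed. */
static struct FunctionPointerTable *const sharedFunctionsPtr =
    (struct FunctionPointerTable *)SHARED_FUNCTIONS_ADDR;

void demo(void)
{
    sharedFunctionsPtr->function1(1024); /* executes code in the "source" image */
}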
As explained in a comment, we could imagine an application and a bootloader relying on the same dynamic library, so that both rely on the library and the application can be changed without impact on the library or the bootloader.
I did not find an easy way to build a shared library with arm-none-eabi-gcc. However, this document gives some alternatives to shared libraries. In your case, I would recommend the jump-table solution.
Write a library with the functions that need to be used in the bootloader and in the application.
"library" code
typedef void (*genericFunctionPointer)(void);

// Forward declarations so the table below can reference the functions.
void lib_f1(void);
uint8_t lib_f2(uint8_t param);

// Use the linker script to place MySection at a known address.
// This could be a structure, as in Russ Schultz's solution, but a struct may or may not
// be laid out identically in the library and the bootloader. A struct would be much
// easier, though, and would avoid many function pointer casts.
const genericFunctionPointer FpointerArray[] __attribute__ ((section ("MySection"))) =
{
(genericFunctionPointer)lib_f1,
(genericFunctionPointer)lib_f2,
};
void lib_f1(void)
{
//some code
}
uint8_t lib_f2(uint8_t param)
{
//some code
}
application and/or bootloader code
typedef void (*genericFunctionPointer)(void);

// Use the linker script to place MySection at the same address the library was built with.
// In the linker script, also mark this section NOLOAD, because it is initialized by the
// library and not by our code.
// volatile is needed here because you read flash memory, and the compiler might otherwise
// assume this array is initialized to NULL pointers.
volatile const genericFunctionPointer FpointerArray[NB_F] __attribute__ ((section ("MySection")));

enum
{
lib_f1,
lib_f2,
NB_F,
};
typedef void (*correctCastF1)(void);
typedef uint8_t (*correctCastF2)(uint8_t);

int main(void)
{
((correctCastF1)FpointerArray[lib_f1])();
uint8_t a = ((correctCastF2)FpointerArray[lib_f2])(10);
}
You can look into using linker sections. If your bootloader source code is in the folder bootloader, you can use:
SECTIONS
{
.bootloader :
{
build_output/bootloader/*.o(.text)
} > flash_region1
.binary1 :
{
build_output/binary1/*.o(.text)
} > flash_region2
.binary2 :
{
build_output/binary2/*.o(.text)
} > flash_region3
}

Using modules to load a group of related functions

I want to use Raku modules to group some functions I use often. Because these functions are all loosely coupled, I don't like to put them in a class.
I like the idea of use, where you can select which functions should be imported, but I don't like that the imported functions are then stored in the global namespace.
For example, if I have a file my_util.pm6:
#content of my_util.pm6
unit module my_util;
our sub greet($who) is export(:greet) {
say $who;
}
sub greet2($who) is export(:greet2) {
say $who;
}
sub greet3($who) is export(:greet3) {
say $who;
}
and a file test.p6:
#!/usr/bin/perl6
#content of test.p6
use v6.c;
use lib '.';
use my_util :greet2;
greet("Bob"); #should not work (because no namespace given) and also doesn't work
greet2("Bob"); #should not work (because no namespace given) but actually works
greet3("Bob"); #should not work (because no namespace given) and also doesn't work
my_util::greet("Alice"); #works, but should not work (because it is not imported)
my_util::greet2("Alice"); #should work, but doesn't work
my_util::greet3("Alice"); #should not work (because it is not imported) and also doesn't work
I would like to call all functions via my_util::greet() and not via greet() only.
The function greet() defined in my_util.pm6 comes very close to my requirements, but because it is defined as our, it is always available. What I like is the possibility to select which functions should be imported, while leaving them in the namespace defined by the module (i.e. not polluting the global namespace).
Does anyone know how I can achieve this?
To clear up some potential confusion...
Lexical scopes and package symbol tables are different things.
1. my adds a symbol to the current lexical scope.
2. our adds a symbol to the current lexical scope, and to the public symbol table of the current package.
3. use copies the requested symbols into the current lexical scope. That's called "importing".
4. The :: separator does a package lookup, i.e. foo::greet looks up the symbol greet in the public symbol table of package foo. This doesn't involve any "importing".
As for what you want to achieve...
The public symbol table of a package is the same no matter where it is referenced from... There is no mechanism for making individual symbols in it visible from different scopes.
You could make the colons part of the actual names of the subroutines...
sub foo::greet($who) is export(:greet) { say "Hello, $who!" }
# This subroutine is now literally called "foo::greet".
...but then you can't call it in the normal way anymore (because the parser would interpret that as rule 4 above), so you would have to use the clunky "indirect lexical lookup" syntax, which is obviously not what you want:
foo::greet "Sam"; # Could not find symbol '&greet'
::<&foo::greet>( "Sam" ); # Hello, Sam!
So, your best bet would be to either...
Declare the subroutines with our, and live with the fact that all of them can be accessed from all scopes that use the module.
Or:
Add the common prefix directly to the subroutine names, but using an unproblematic separator (such as the dash), and then import them normally:
unit module foo;
sub foo-greet($who) is export(:greet) { ... }
sub foo-greet2($who) is export(:greet2) { ... }
sub foo-greet3($who) is export(:greet3) { ... }

How to load a DLL into a VS C++ project that acts as a wrapper callable from CAPL code?

I am trying to reference functions in a third-party DLL file through a CAPL script. Since I cannot call them directly, I am trying to create a wrapper DLL which exports the functions in that DLL.
int MA_Init(char *TbName, int Option); is the function in the dll file.
The wrapper code for this is:
int CAPLEXPORT far CAPLPASCAL CMA_Init(char *TbName, int Option)
{
return MA_Init(*TbName, Option);
}
I am trying to use
HINSTANCE DllHandle = LoadLibrary("C:\\Turbo.dll"); to load the library, and
typedef int (*TESTFnptr)(char*, int);
TESTFnptr fn = (TESTFnptr)GetProcAddress(DllHandle, "MA_Init"); to resolve the function address.
However, the compiler says the function MA_Init() is not defined. I am not sure whether I am using the correct procedure to load the DLL into my Visual C++ project. Has anyone tried doing this or knows how it's done? Thank you very much.
The standard procedure would be to add the corresponding .lib file to the VS project. Go to "Project - Properties - Configuration Properties - Linker - Additional Dependencies" and add turbo.lib on a new line. Then you'll need to include the corresponding turbo.h header file, which contains the declaration of the MA_Init function.
In this case, you'll be able to call MA_Init directly, as you do now. The compiler will happily find the declaration of MA_Init in the header file, and the linker will find the reference to MA_Init in the .lib file.
If you don't have the turbo.h file, you can create one yourself, provided you know the prototypes of all the functions you want to use. Just put declarations like
int MA_Init(char *TbName, int Option);
there and include it.
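For concreteness, a minimal hand-written turbo.h along those lines might look like this (the only prototype shown is the one from the question; the include guard and extern "C" wrapper are my assumptions for a C-style DLL):
/* turbo.h - hand-written header for Turbo.dll */
#ifndef TURBO_H
#define TURBO_H

#ifdef __cplusplus
extern "C" {
#endif

/* Prototype from the question; add more functions as you learn their signatures. */
int MA_Init(char *TbName, int Option);

#ifdef __cplusplus
}
#endif

#endif /* TURBO_H */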
If you don't have the turbo.lib file, you'll have to proceed with LoadLibrary and GetProcAddress. Obviously, you cannot call MA_Init by name in this case, since it is undeclared. You'll have to call the pointer returned by GetProcAddress instead:
TESTFnptr fn = (TESTFnptr)GetProcAddress(DllHandle, "MA_Init");
int CAPLEXPORT far CAPLPASCAL CMA_Init(char *TbName, int Option)
{
return fn(TbName, Option);
}
PS. Notice I removed the star in front of TbName?
PPS. Don't forget to add your wrapper function, CMA_Init, to CAPL_DLL_INFO_LIST; otherwise it will not be accessible in CANoe/CANalyzer.
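Putting it all together, a minimal self-contained sketch of the wrapper source (assuming the CAPLEXPORT/CAPLPASCAL macros come from the Vector CAPL DLL kit headers; the lazy-loading helper ensureTurboLoaded and the -1 error code are my own hypothetical additions, and the CAPL_DLL_INFO_LIST registration mentioned above is omitted for brevity):
#include <windows.h>

typedef int (*TESTFnptr)(char*, int);   /* signature of MA_Init from the question */

static HINSTANCE turboDll = NULL;
static TESTFnptr maInit = NULL;

/* Hypothetical helper: load Turbo.dll once and resolve MA_Init by name. */
static int ensureTurboLoaded(void)
{
    if (maInit == NULL) {
        turboDll = LoadLibraryA("C:\\Turbo.dll");
        if (turboDll != NULL)
            maInit = (TESTFnptr)GetProcAddress(turboDll, "MA_Init");
    }
    return maInit != NULL;
}

/* Exported wrapper: forwards to the resolved pointer. */
int CAPLEXPORT far CAPLPASCAL CMA_Init(char *TbName, int Option)
{
    if (!ensureTurboLoaded())
        return -1; /* hypothetical error code for "DLL or symbol not found" */
    return maInit(TbName, Option);
}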