I'd assumed that dumping a .bc file from a module was a trivial operation, but now, the first time I actually have to do it from code, I can't for the life of me find the one missing step in the process:
static void WriteModule ( const Module * M, BitstreamWriter & Stream )
http://llvm.org/docs/doxygen/html/BitcodeWriter_8cpp.html#a828cec7a8fed9d232556420efef7ae89
To write that module, first I need a BitstreamWriter:
BitstreamWriter::BitstreamWriter (SmallVectorImpl< char > &O)
http://llvm.org/docs/doxygen/html/classllvm_1_1BitstreamWriter.html
and for a BitstreamWriter I need a SmallVectorImpl. But what next? Should I write the contents of the SmallVectorImpl to a file handle byte by byte myself? Is there an LLVM API for this? Do I need something else?
The WriteModule function is static within lib/Bitcode/Writer/BitcodeWriter.cpp, which means it's not there for outside consumption (you can't even access it).
The same file has another function, however, called WriteBitcodeToFile, with this interface:
/// WriteBitcodeToFile - Write the specified module to the specified output
/// stream.
void llvm::WriteBitcodeToFile(const Module *M, raw_ostream &Out);
I can't imagine a more convenient interface. The header file declaring it is ./include/llvm/Bitcode/ReaderWriter.h, by the way.
I use the following code (pBiFModule is the Module* being serialized):
std::error_code EC;
llvm::raw_fd_ostream OS("module", EC, llvm::sys::fs::F_None);
WriteBitcodeToFile(pBiFModule, OS);
OS.flush();
and then disassemble using llvm-dis.
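For completeness, here is a minimal self-contained sketch, assuming an LLVM 3.x-era tree (in 4.0 and later the declaration moved from llvm/Bitcode/ReaderWriter.h to llvm/Bitcode/BitcodeWriter.h); the module name and output path are made up:

#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Bitcode/ReaderWriter.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/raw_ostream.h"

int main() {
    llvm::LLVMContext Context;
    llvm::Module M("my_module", Context); // an empty module is enough to test the round trip
    std::error_code EC;
    llvm::raw_fd_ostream OS("my_module.bc", EC, llvm::sys::fs::F_None);
    if (EC)
        return 1; // could not open the output file
    llvm::WriteBitcodeToFile(&M, OS); // serializes the module as bitcode
    return 0;
}

Running llvm-dis my_module.bc afterwards should produce a readable (empty) my_module.ll.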
I'm looking for documentation/information on how to share information/code between multiple binaries compiled for Cortex-M0/4/7 architectures. The two binaries will be on the same chip and the same architecture. They are flashed at different locations, and one binary sets the main stack pointer and resets the program counter so that it "jumps" to the other binary. I want to share code between these two binaries.
I've done a simple copy of an array of function pointers into a RAM section defined in the linker script. The other binary reads that RAM, casts it to an array, and then uses an index to call functions in the first binary. This works as a proof of concept, but what I'm looking for is a bit more complex, as I want some way of describing compatibility between the two binaries. I want something like the functionality of shared libraries, but I'm unsure whether I need position-independent code.
As an example, the current copy process is basically:
Source binary:
void copy_func()
{
    /* copy the function pointer table into the shared RAM section (memcpy(dest, src, n)) */
    memcpy(address_custom_ram_section, array_of_function_pointers, fixed_size);
}
Binary which is jumped to from the source binary:
array_fp_type get_funcs()
{
    /* copy the table back out of the shared RAM section */
    memcpy(array_of_fp, address_custom_ram_section, fixed_size);
    return array_of_fp;
}
Then I can use array_of_fp to call functions residing in the source binary from the jump binary.
So what I'm looking for is resources or input from someone who has implemented a similar system. For example, I would like to avoid needing a custom RAM section to copy the function pointers into.
I would be fine with the build of the source binary outputting something which can be included in the build of the jump binary. However, it needs to be reproducible: recompiling the source binary shouldn't break compatibility with the jump binary (even if the jump binary was built against an older version of that output), as long as the interface doesn't change.
To clarify: the source binary shouldn't require any specific knowledge about the jump binary. The code should not reside in both binaries, as that would defeat the purpose of this mechanism. The overall goal of this mechanism is to save space when creating multi-binary applications on Cortex-M processors.
Any ideas or links to resources are welcome. If you have any more questions, feel free to comment on the question and I'll try to answer them.
It's very hard for me to picture what you want to do, but if you're interested in having an application link against your bootloader/ROM, then see Loading symbol file while linking for a hint on what you could do.
Build your "source"(?) image, scrape its mapfile and make a symbol file, then use that when you link your "jump"(?) image.
This does mean you need to link your "jump" image against a specific version of your "source" image.
If you need them to be semi-version-independent (i.e. you define a set of functions that get exported, but you can rebuild either side), then you need to export function pointers at known locations in your "source" image and link against those function pointers in your "jump" image. You can simplify the bookkeeping by making a structure of function pointers and accessing the functions through it on either side.
For example:
shared_functions.h:
struct FunctionPointerTable
{
    void (*function1)(int);
    void (*function2)(char);
};
extern struct FunctionPointerTable sharedFunctions;
Source file in "source" image:

#include <stdio.h>
#include "shared_functions.h"

void function2Implementation(char b); /* forward declaration so the calls below compile */

void function1Implementation(int a)
{
    printf("You sent me an integer: %d\r\n", a);
    function2Implementation((char)(a % 256));   /* direct call: always this image's version */
    sharedFunctions.function2((char)(a % 256)); /* call through the table: overridable */
}
void function2Implementation(char b)
{
    printf("You sent me a char: %c\r\n", b);
}
struct FunctionPointerTable sharedFunctions =
{
    function1Implementation,
    function2Implementation,
};
Source file in "jump" image:

#include "shared_functions.h"

void someCallSite(void) /* hypothetical caller: the calls must live inside a function */
{
    sharedFunctions.function1(1024);
    sharedFunctions.function2(100);
}
When you compile/link the "source" image, take its mapfile, extract the location of sharedFunctions, and create a symbol file that is linked in when you build the "jump" image.
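For example, a hedged sketch assuming a GNU toolchain (the address is made up): the symbol file can be a plain linker-script fragment assigning the scraped address:

/* shared_syms.ld -- generated from the "source" image's mapfile */
sharedFunctions = 0x08001234;

GNU ld accepts such a fragment as an ordinary input file on the "jump" image's link line; alternatively, -Wl,--just-symbols=source.elf imports every symbol's address directly from the "source" ELF.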
Note: the printfs (or anything directly called by the shared functions) would come from the "source" image (and not the "jump" image).
If you need them to come from the "jump" image (or be overridable), then you need to access them through the same function pointer table, and the "jump" image needs to fix up the function pointer table with its version of the relevant function. I updated function1() to show this: the direct call to function2 will always be the "source" version, while the call through the shared function table will reach the "source" version unless the "jump" image updates the table to point to its own implementation.
You CAN get away without the structure, but then you need to export the function pointers one by one (not a big problem); you still want to keep them in order and at a fixed location, which means explicitly placing them in the linker script, etc. I showed the structure method to distill it down to the easiest example.
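A hedged sketch of that explicit placement (GNU ld syntax; the section name, address, and memory region are all made up):

/* pin the exported pointer table at a fixed flash address */
.shared_fn_table 0x08004000 :
{
    KEEP(*(.shared_fn_table)) /* pointers tagged __attribute__((section(".shared_fn_table"))) */
} > FLASH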
As you can see, things get pretty hairy, and there is some penalty (calling through the function pointer is slower, because you need to load the address to jump to).
As explained in the comments, we could imagine an application and a bootloader relying on the same dynamic library, so that the application can be changed without impact on the library or the bootloader.
I did not find an easy way to build a shared library with arm-none-eabi-gcc. However, this document gives some alternatives to shared libraries. In your case, I would recommend the jump table solution.
Write a library with the functions that need to be used in both the bootloader and the application.
"library" code
#include <stdint.h>

typedef void (*genericFunctionPointer)(void);

void lib_f1(void);              /* forward declarations so the table can reference them */
uint8_t lib_f2(uint8_t param);

/* Use the linker script to place MySection at a known address.
   This could be a structure, like in Russ Schultz's solution, but a struct may or may
   not be laid out identically in the library and the bootloader builds. A struct would
   be much easier, though, and would avoid many function pointer casts. */
const genericFunctionPointer FpointerArray[] __attribute__ ((section ("MySection"))) =
{
    (genericFunctionPointer)lib_f1,
    (genericFunctionPointer)lib_f2,
};

void lib_f1(void)
{
    /* some code */
}

uint8_t lib_f2(uint8_t param)
{
    /* some code */
    return param;
}
Application and/or bootloader code:

#include <stdint.h>

typedef void (*genericFunctionPointer)(void);
typedef void (*correctCastF1)(void);        /* actual signature of lib_f1 */
typedef uint8_t (*correctCastF2)(uint8_t);  /* actual signature of lib_f2 */

/* The enum must come before the array declaration so that NB_F is visible. */
enum
{
    lib_f1,
    lib_f2,
    NB_F,
};

/* Use the linker script to place MySection at the same address the library was built
   with; in the linker script, also mark this section NOLOAD, because it is initialised
   by the library and not by our code.
   volatile is needed here because the array lives in flash and the compiler might
   otherwise assume this uninitialised const array reads as NULL pointers. */
volatile const genericFunctionPointer FpointerArray[NB_F] __attribute__ ((section ("MySection")));

int main(void)
{
    ((correctCastF1)FpointerArray[lib_f1])();
    uint8_t a = ((correctCastF2)FpointerArray[lib_f2])(10);
    (void)a;
    return 0;
}
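A hedged sketch of the matching linker-script pieces (GNU ld syntax; the address and memory region name are made up):

/* library's linker script: place the table at a known flash address */
MySection 0x08010000 : { KEEP(*(MySection)) } > FLASH

/* application/bootloader linker script: same address, but NOLOAD,
   because the library image already provides the contents */
MySection 0x08010000 (NOLOAD) : { KEEP(*(MySection)) } > FLASH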
You can look into using linker sections. If you have your bootloader source code in a folder bootloader, you can use:
SECTIONS
{
    .bootloader :
    {
        build_output/bootloader/*.o(.text)
    } > flash_region1
    .binary1 :
    {
        build_output/binary1/*.o(.text)
    } > flash_region2
    .binary2 :
    {
        build_output/binary2/*.o(.text)
    } > flash_region3
}
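The flash_region* names referenced above must also be defined in a MEMORY block; a hedged sketch (the origins and lengths are made up and must match your part's flash layout):

MEMORY
{
    flash_region1 (rx) : ORIGIN = 0x08000000, LENGTH = 32K
    flash_region2 (rx) : ORIGIN = 0x08008000, LENGTH = 224K
    flash_region3 (rx) : ORIGIN = 0x08040000, LENGTH = 256K
}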
I've inherited a piece of custom test equipment with a control library built as a COM object, and I'm trying to connect it to our Tcl test script library. I can connect to the DLL using tcom and do some simple control operations with single int parameters. However, certain features are controlled by passing in a C/C++ struct that contains the control blocks, and attempting to use them in tcom gives me the error 0x80020005 {Type mismatch.}. The struct is defined in the .idl file, so it's available for tcom to use.
The simplest example is a particular call as follows:
C++ .idl file:
struct SourceScaleRange
{
float MinVoltage;
float MaxVoltage;
};
interface IAnalogIn : IDispatch{
...
[id(4), helpstring("method GetAdcScaleRange")] HRESULT GetAdcScaleRange(
[out] struct SourceScaleRange *scaleRange);
...
}
Tcl wrapper:
::tcom::import [file join $::libDir "PulseMeas.tlb"] ::char
set ::characterizer(AnalogIn) [::char::AnalogIn]
set scaleRange ""
set response [$::characterizer(AnalogIn) GetAdcScaleRange scaleRange]
Resulting error:
0x80020005 {Type mismatch.}
while executing
"$::characterizer(AnalogIn) GetAdcScaleRange scaleRange"
(procedure "charGetAdcScaleRange" line 4)
When I dump tcom's methods, it knows the name of the struct, at least, but it seems to have dropped the struct keyword. Some introspection code
set ifhandle [::tcom::info interface $::characterizer(AnalogIn)]
puts "methods: [$ifhandle methods]"
returns
methods: ... {4 VOID GetAdcScaleRange {{out {SourceScaleRange *} scaleRange}}} ...
I don't know if this is meaningful or not.
At this point, I'd be happy to get any ideas on where to look next. Is this a known tcom limitation (undocumented, but known)? Is there a way to pre-process the parameter into an appropriate format using tcom? Do I need to force it into a correctly sized block of memory via binary format by manual construction? Do I need to take the DLL back to the original developer and have him pull out all the struct parameters? (Not likely to happen, in this reality.) Any input is good input.
I have an existing project, originally implemented as a VxWorks 5.5-style kernel module.
This project creates many tasks that act as a "host" to run external code. We do something like this:
void loadAndRun(char* file, char* function)
{
    /* load the module */
    int fd = open(file, O_RDONLY, 0644);
    loadModule(fd, LOAD_ALL_SYMBOLS);

    /* look the entry point up in the system symbol table */
    SYM_TYPE type;
    FUNCPTR func;
    symFindByName(sysSymTbl, function, (char**) &func, &type);

    while (true)
    {
        func();
    }
}
This all works a dream; however, the functions that get called are non-reentrant, with global data all over the place, etc. We have a new requirement to be able to run multiple instances of these external modules, and my obvious first thought is to use VxWorks RTPs to provide memory isolation.
However, no matter what I try, I cannot persuade my new RTP project to compile and link.
error: 'sysSymTbl' undeclared (first use in this function)
If I add the correct include:
#include <sysSymTbl.h>
I get:
error: sysSymTbl.h: No such file or directory
and if I just declare it extern:
extern SYMTAB_ID sysSymTbl;
I get:
error: undefined reference to `sysSymTbl'
I haven't even begun to stitch in the actual module load code; at the moment I just want to get the symbol lookup working.
So, is the system symbol table accessible from VxWorks RTP applications? Can moduleLoad be used?
EDIT
It appears that what I am trying to do is covered by the Application Programmer's Guide in the section on Plugins (section 4.9 for v6.8) (thanks @nos), which is to use dlopen() etc., like this:
#include <dlfcn.h>

void *hdl = dlopen("pathname", RTLD_NOW);
FUNCPTR func = (FUNCPTR) dlsym(hdl, "FunctionName");
func();
However, I still end up in linker hell, even when I specify -Xbind-lazy -non-static to the compiler.
undefined reference to `_rtld_dlopen'
undefined reference to `_rtld_dlsym'
The problem here was that the documentation says to specify -Xbind-lazy and -non-static as compiler options. However, these should actually be added to the linker options.
libc.so.1 for the appropriate build target is then required on the target to satisfy the run-time link requirements.
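A hedged sketch of that fix in makefile terms (the variable names are generic GNU make conventions, not anything VxWorks-specific):

# wrong: passing the flags at the compile step has no effect
# CFLAGS  += -Xbind-lazy -non-static

# right: pass them when linking the RTP executable
LDFLAGS += -Xbind-lazy -non-static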
There is a "NameAndType" structure in the constant pool of a .class file.
It is used for dynamic binding.
All methods that a class can "export" are described as "signature + return type",
like
"getVector()Ljava/util/Vector;"
That breaks my code when the return type of a method in some .jar is changed, even if the new type is narrower.
For example, I have the following code:
List l = some.getList();
External .jar contains:
public List getList()
Then the external jar changes the method signature to
public ArrayList getList()
and my code dies at run time with NoSuchMethodError, because it can't find
getList()Ljava/util/List;
So I have to recompile my code.
I do not have to change it, just recompile absolutely the same code!
This also makes it possible to have two methods with the same signature but different return types! The compiler would not accept it, but it can be done by writing the bytecode directly.
My question is: why?
Why did they do it?
I have only one idea: to avoid sophisticated type checking at run time.
You would need to look up the hierarchy and check whether there is a parent with the List interface.
That takes time, and only the compiler has it; the JVM does not.
Am I right?
Thanks.
One reason may be because method overloading (as opposed to overriding) is determined at compile time. Consider the following methods:
public void doSomething(List util) {}
public void doSomething(ArrayList util) {}
And consider code:
doSomething(getList());
If Java allowed the return type to change and did not throw an exception, the method called would still be doSomething(List) until you recompiled - then it would be doSomething(ArrayList). This would mean that working code could change behavior just from having been recompiled.
I'm trying to build a small program that hosts VST effects, and I would like to scan a folder for plugin dlls.
I know how to find all the dlls, but now I have the following questions:
What is the best way to determine whether a given dll is a VST plugin?
I tried just checking whether the dll exports the proper function, and this works fine for plugins built with more recent versions of the VST SDK, since they export a function called "VSTPluginMain", but older versions export a rather generic "main" function instead.
How do I determine if the plugin is an effect or an instrument?
How do I scan vst shell plugins?
Shell plugins are basically dlls that somehow contain multiple effects. An example of this is the plugins made by Waves Audio: http://www.waves.com/
PS: if there is a library that can do all of this for me, please let me know.
How to determine a VST plugin?
Once you've found main/VSTPluginMain... call it!
If what's returned is NULL, it's not a VST.
If what's returned is a pointer to the bytes "VstP" (see VstInt32 magic; ///< must be #kEffectMagic ('VstP') in aeffect.h), then you have a VST.
The VSTPluginMain returns a pointer to an AEffect structure. You will need to look at this structure.
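A hedged sketch of that check (Win32 loading, VST2 SDK headers assumed; the helper name is made up):

#include <windows.h>
#include "aeffect.h" /* from the VST2 SDK: AEffect, kEffectMagic, audioMasterCallback */

typedef AEffect* (*VstEntryProc)(audioMasterCallback);

/* Returns the AEffect* if the dll at `path` is a VST, or NULL otherwise. */
AEffect* tryOpenVst(const char* path, audioMasterCallback hostCallback)
{
    HMODULE lib = LoadLibraryA(path);
    if (lib == NULL)
        return NULL;
    /* Newer SDKs export "VSTPluginMain"; older plugins export a plain "main". */
    VstEntryProc entry = (VstEntryProc)GetProcAddress(lib, "VSTPluginMain");
    if (entry == NULL)
        entry = (VstEntryProc)GetProcAddress(lib, "main");
    if (entry == NULL) {
        FreeLibrary(lib);
        return NULL;
    }
    AEffect* effect = entry(hostCallback);
    if (effect == NULL || effect->magic != kEffectMagic) {
        FreeLibrary(lib);
        return NULL;
    }
    return effect; /* a real VST; effect->flags & effFlagsIsSynth distinguishes instruments */
}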
Effect or instrument? Test AEffect::flags & effFlagsIsSynth (effFlagsIsSynth = 1 << 8).
Shell VSTs are more complex:
Their category will be kPlugCategShell.
They support the "shellCategory" canDo.
Use effShellGetNextPlugin to enumerate the sub-plugins.
To instantiate one, respond to audioMasterCurrentId in your callback with the ID you want.
@Dave Gamble nailed it, but I wanted to add a few things on VST shell plugins, since they are a bit tricky to work with.
To determine whether a VST is a shell plugin, send the effGetPlugCategory opcode to the plugin's dispatcher. If it returns kPlugCategShell, then it's a shell plugin. To get the list of sub-plugins in the shell, you basically call effShellGetNextPlugin until it returns 0. Example code snippet (adapted from a working VST host):
// All this stuff should probably be set up far earlier in your code...
// This assumes that you have already opened the plugin and called VSTPluginMain().
typedef VstIntPtr (*Vst2xPluginDispatcherFunc)(AEffect *effect, VstInt32 opCode, VstInt32 index, VstIntPtr value, void *ptr, float opt);
AEffect* plugin; // returned by the plugin's VSTPluginMain()
Vst2xPluginDispatcherFunc dispatcher = (Vst2xPluginDispatcherFunc)plugin->dispatcher;
char nameBuffer[40];
while(true) {
    memset(nameBuffer, 0, sizeof(nameBuffer));
    VstInt32 shellPluginId = (VstInt32)dispatcher(plugin, effShellGetNextPlugin, 0, 0, nameBuffer, 0.0f);
    if(shellPluginId == 0 || nameBuffer[0] == '\0') {
        break;
    }
    else {
        // Do something with the sub-plugin's name and ID
    }
}
If you actually want to load a plugin from a VST shell, it's a bit trickier. First, your host needs to handle the audioMasterCurrentId opcode in the host callback. When you call the VST's VSTPluginMain() function to instantiate the plugin, it will call the host callback with this opcode and ask for the unique ID which should be loaded.
Because this callback is made before the main function returns (and hence before it delivers an AEffect* to your host), you will probably need to store the shell plugin ID to load in a global variable, since you will not be able to save a pointer to any meaningful data in the void* user field of the AEffect struct in time for it to be passed back to you in the host callback.
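A hedged sketch of that callback plumbing (the global and the callback skeleton are illustrative, not from any particular host):

#include "aeffect.h" /* VST2 SDK: AEffect, VstIntPtr, audioMasterCurrentId, VSTCALLBACK */

/* Set this immediately before calling the shell's VSTPluginMain(). */
static VstInt32 gCurrentShellPluginId = 0;

VstIntPtr VSTCALLBACK hostCallback(AEffect* effect, VstInt32 opcode,
                                   VstInt32 index, VstIntPtr value,
                                   void* ptr, float opt)
{
    switch (opcode) {
    case audioMasterVersion:
        return 2400; /* we speak VST 2.4 */
    case audioMasterCurrentId:
        /* The shell is asking which sub-plugin it should instantiate. */
        return gCurrentShellPluginId;
    /* ... handle the other opcodes your host supports ... */
    default:
        return 0;
    }
}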
If you want to develop your VST host application in .NET, take a look at VST.NET.