I read this article and tried to do the exercises in the D programming language, but ran into a problem with the first exercise.
(1) Display series of numbers (1, 2, 3, 4, 5 ...etc) in an infinite loop. The program should quit if someone hits a specific key (say ESCAPE key).
Of course the infinite loop is not a big problem, but the rest is. How can I grab a key hit in D/Tango? The Tango FAQ says to use the C function kbhit() or get(), but as far as I know these are not in the C standard library and do not exist in the glibc that comes with the Linux machine I use for programming.
I know I can use a third-party library like ncurses, but it has the same problem as kbhit() or get(): it is not a standard library in C or D and is not pre-installed on Windows. What I hope is to do this exercise using just D/Tango and have it run on both Linux and Windows machines.
How could I do it?
Here's how you do it in the D programming language:
import std.c.stdio;
import std.c.linux.termios;
termios ostate; /* saved tty state */
termios nstate; /* values for editor mode */
// Open stdin in raw mode
/* Adjust output channel */
tcgetattr(1, &ostate); /* save old state */
tcgetattr(1, &nstate); /* get base of new state */
cfmakeraw(&nstate);
tcsetattr(1, TCSADRAIN, &nstate); /* set mode */
// Read characters in raw mode
int c = fgetc(stdin);
// Close
tcsetattr(1, TCSADRAIN, &ostate); // return to original mode
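For the exercise itself, here is a minimal plain-C sketch of the same termios approach (assuming a POSIX terminal): putting stdin into raw, non-blocking mode lets the counting loop poll for ESC without ncurses and without a second thread. The same calls can be made from D through its C bindings.

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios saved, raw;

    /* Save the terminal state and switch stdin to raw, non-blocking mode */
    tcgetattr(STDIN_FILENO, &saved);
    raw = saved;
    cfmakeraw(&raw);
    raw.c_cc[VMIN] = 0;     /* read() may return immediately... */
    raw.c_cc[VTIME] = 0;    /* ...even if no byte is available */
    tcsetattr(STDIN_FILENO, TCSADRAIN, &raw);

    for (int i = 0; ; i++) {
        printf("%d\r\n", i);

        unsigned char c;
        /* Poll the keyboard: read() returns 0 when no key is waiting */
        if (read(STDIN_FILENO, &c, 1) == 1 && c == 27)   /* 27 == ESC */
            break;
    }

    tcsetattr(STDIN_FILENO, TCSADRAIN, &saved);   /* restore the terminal */
    return 0;
}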
kbhit is indeed not part of any standard C interfaces, but can be found in conio.h.
However, you should be able to use getc/getchar from tango.stdc.stdio - I changed the FAQ you mention to reflect this.
D generally has all the C stdlib available (Tango or Phobos) so answers to this question for GNU C should work in D as well.
If Tango doesn't have the needed function, generating the bindings is easy. (Take a look at the output of cpp, the C preprocessor, to cut through any macro junk.)
Thanks for both of your replies.
Unfortunately, my main development environment is Linux + GDC + Tango, so I don't have conio.h, since I don't use DMC as my C compiler.
I also found that both getc() and getchar() are line-buffered in my development environment, so they could not achieve what I wish to do.
In the end, I did this exercise using the GNU ncurses library. Since D can interface with C libraries directly, it did not take much effort: I just declare prototypes for the functions I use, call them, and link my program against ncurses directly.
It works perfectly on my Linux machine, but I still have not figured out how to do this without any third-party library in a way that runs on both Linux and Windows.
import tango.io.Stdout;
import tango.core.Thread;

// Prototypes for the ncurses library functions used below.
extern(C)
{
    void* initscr();
    int cbreak();
    int getch();
    int endwin();
    int noecho();
}

// A keyboard handler to quit the program when the user hits the ESC key.
void keyboardHandler()
{
    initscr();
    cbreak();
    noecho();
    while (getch() != 27) {
    }
    endwin();
}

// Main program
void main()
{
    Thread handler = new Thread(&keyboardHandler);
    handler.start();

    for (int i = 0; ; i++) {
        Stdout.format("{}\r\n", i).flush;

        // If keyboardHandler is no longer running, the user hit the
        // ESC key, so we break out of the infinite loop.
        if (handler.isRunning == false) {
            break;
        }
    }
}
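For reference, building it is just a matter of linking against ncurses, e.g. something like gdc counter.d -lncurses (the file name here is hypothetical); no separate binding package is needed beyond the extern(C) prototypes above.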
As Lars pointed out, you can use _kbhit and _getch defined in conio.h and implemented in (I believe) msvcrt for Windows. Here's an article with C++ code for using _kbhit and _getch.
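For completeness, here is a minimal C sketch of the Windows side, assuming the Microsoft C runtime's conio.h; the same two calls could be declared extern(C) and used from D when compiling for Windows.

#include <conio.h>   /* Windows-only: _kbhit()/_getch() from the MS C runtime */
#include <stdio.h>

int main(void)
{
    for (int i = 0; ; i++) {
        printf("%d\n", i);

        /* _kbhit() reports whether a key press is waiting;
           _getch() reads it without echoing or waiting for Enter. */
        if (_kbhit() && _getch() == 27)   /* 27 == ESC */
            break;
    }
    return 0;
}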
I'm looking for documentation/information on how to share information/code between multiple binaries compiled for Cortex-M0/M4/M7 architectures. The two binaries will be on the same chip and same architecture. They are flashed at different locations; one binary sets the main stack pointer and resets the program counter so that it "jumps" to the other binary. I want to share code between these two binaries.
I've done a simple copy of an array of function pointers into a RAM section defined in the linker script, then read the RAM out in the other binary, cast it to an array, and used an index to call functions in the other binary. This does work as a proof of concept, but I think what I'm looking for is a bit more complex, as I want some way of describing compatibility between the two binaries. I want somewhat the functionality of shared libraries, but I'm unsure whether I need position-independent code.
As an example, the current copy process is basically:
Source binary:
void copy_func()
{
    /* copy the function pointer array into the custom RAM section */
    memcpy(address_custom_ram_section, array_of_function_pointers, fixed_size);
}
Binary which is jumped to from the source binary:
array_fp_type get_funcs()
{
    /* read the pointers back out of the custom RAM section */
    memcpy(array_of_fp, address_custom_ram_section, fixed_size);
    return array_of_fp;
}
Then I can use the array_of_fp to call into functions residing in the source binary from the jump binary.
So what I'm looking for is resources or input from someone who has implemented a similar system. For instance, I would like to not have to have a custom RAM section that I copy the function pointers into.
I would be fine with the compilation step of the source binary outputting something which can be included in the compilation step of the jump binary. However, it needs to be reproducible, and recompiling the source binary shouldn't break compatibility with the jump binary (even if it included a different file from what is now output), as long as you don't change the interface.
To clarify, the source binary shouldn't require any specific knowledge about the jump binary. The code should not reside in both binaries, as this would defeat the purpose of this mechanism. The overall goal of this mechanism is a way to save space when creating multi-binary applications on Cortex-M processors.
Any ideas or links to resources are welcome. If you have any more questions feel free to comment on the question and I'll try to answer it.
It's very hard for me to picture what you want to do, but if you're interested in having an application link against your bootloader/ROM, then see Loading symbol file while linking for a hint on what you could do.
Build your "source"(?) image, scrape its mapfile and make a symbol file, then use that when you link your "jump"(?) image.
This does mean you need to link your "jump" image against a specific version of your "source" image.
If you need them to be semi-version-independent (i.e. you define a set of functions that get exported, but you can rebuild on either side), then you need to export function pointers at known locations in your "source" image and link against those function pointers in your "jump" image. You can simplify the bookkeeping by making a structure of function pointers and accessing the functions through that on either side.
For example:
shared_functions.h:
struct FunctionPointerTable
{
    void (*function1)(int);
    void (*function2)(char);
};
extern struct FunctionPointerTable sharedFunctions;
Source file in "source" image:
#include <stdio.h>
#include "shared_functions.h"

void function2Implementation(char b);   /* forward declaration */

void function1Implementation(int a)
{
    printf("You sent me an integer: %d\r\n", a);
    function2Implementation((char)(a % 256));
    sharedFunctions.function2((char)(a % 256));
}

void function2Implementation(char b)
{
    printf("You sent me a char: %c\r\n", b);
}

struct FunctionPointerTable sharedFunctions =
{
    function1Implementation,
    function2Implementation,
};
Source file in "jump" image:
#include "shared_functions.h"
sharedFunctions.function1(1024);
sharedFunctions.function2(100);
When you compile/link the "source" image, take its mapfile, extract the location of sharedFunctions, and create a symbol file that is linked with the source of the "jump" image.
Note: the printfs (or anything directly called by the shared functions) would come from the "source" image (and not the "jump" image).
If you need them to come from the "jump" image (or be overridable), then you need to access them through the same function pointer table, and the "jump" image needs to fix the function pointer table up with its version of the relevant function. I updated function1() to show this. The direct call to function2 will always be the "source" version. The shared-function-call version of it will go through the jump table and call the "source" version unless the "jump" image updates the function table to point to its own implementation.
You CAN get away without the structure, but then you need to export the function pointers one by one (not a big problem); you do want to keep them in order and at a fixed location, which means explicitly putting them in the linker descriptor file, etc. etc. I showed the structure method to distill it down to the easiest example.
As you can see, things get pretty hairy, and there is some penalty (calling through the function pointer is slower because you need to load up the address to jump to).
As explained in the comments, we could imagine an application and a bootloader relying on the same dynamic library, so the application and bootloader both rely on the library, and the application can be changed without impact on the library or the bootloader.
I did not find an easy way to build a shared library with arm-none-eabi-gcc. However, this document gives some alternatives to shared libraries. In your case, I would recommend the jump table solution.
Write a library with the functions that need to be used in the bootloader and in the application.
"library" code
#include <stdint.h>

typedef void (*genericFunctionPointer)(void);

void lib_f1(void);
uint8_t lib_f2(uint8_t param);

// Use the linker script to set MySection at a known address.
// This could be a structure like in Russ Schultz's solution, but a struct may or may not
// be laid out identically in the lib and the boot code. A struct would be much easier,
// though, and would avoid many function pointer casts.
const genericFunctionPointer FpointerArray[] __attribute__ ((section ("MySection"))) =
{
    (genericFunctionPointer)lib_f1,
    (genericFunctionPointer)lib_f2,
};

void lib_f1(void)
{
    // some code
}

uint8_t lib_f2(uint8_t param)
{
    // some code
}
application and/or bootloader code
#include <stdint.h>

typedef void (*genericFunctionPointer)(void);

// Casts matching the real signatures of the library functions.
typedef void (*correctCastF1)(void);
typedef uint8_t (*correctCastF2)(uint8_t);

enum
{
    lib_f1,
    lib_f2,
    NB_F,
};

// Use the linker script to put MySection at the same address the library was compiled with.
// In the linker script, also mark this section as NOLOAD, because it is initialised by the
// library and not by our code.
// volatile is needed here because the array lives in flash memory and the compiler might
// otherwise assume it is full of NULL pointers.
volatile const genericFunctionPointer FpointerArray[NB_F] __attribute__ ((section ("MySection")));

int main(void)
{
    ((correctCastF1)FpointerArray[lib_f1])();
    uint8_t a = ((correctCastF2)FpointerArray[lib_f2])(10);
    return 0;
}
You can look into using linker sections. If you have your bootloader source code in folder bootloader, you can use
SECTIONS
{
    .bootloader :
    {
        build_output/bootloader/*.o(.text)
    } > flash_region1

    .binary1 :
    {
        build_output/binary1/*.o(.text)
    } > flash_region2

    .binary2 :
    {
        build_output/binary2/*.o(.text)
    } > flash_region3
}
I have a C++/CLI method, ManagedMethod, with one output argument that will be modified by a native method as such:
// file: test.cpp

#pragma unmanaged

void NativeMethod(int& n)
{
    n = 123;
}

#pragma managed

void ManagedMethod([System::Runtime::InteropServices::Out] int% n)
{
    pin_ptr<int> pinned = &n;
    NativeMethod(*pinned);
}

void main()
{
    int n = 0;
    ManagedMethod(n);
    // n is now modified
}
Once ManagedMethod returns, the value of n has been modified as I would expect. So far, the only way I've been able to get this to compile is to use a pin_ptr inside ManagedMethod, so is pinning in fact the correct/only way to do this? Or is there a more elegant way of passing n to NativeMethod?
Yes, this is the correct way to do it. Very highly optimized inside the CLR, the variable gets the [pinned] attribute so the CLR knows that it stores an interior pointer to an object that should not be moved. Distinct from GCHandle::Alloc(), pin_ptr<> can do it without creating another handle. It is reported in the table that the jitter generates when it compiles the method, the GC uses that table to know where to look for object roots.
Which only ever matters when a garbage collection occurs at the exact same time that NativeMethod() is running. Doesn't happen very often in practice, you'd have to use threads in the program. YMMV.
There is another way to do it, doesn't require pinning but requires a wee bit more machine code:
void ManagedMethod(int% n)
{
    int copy = n;
    NativeMethod(copy);
    n = copy;
}
Which works because local variables have stack storage and thus won't be moved by the garbage collector. It does not win any elegance points for style, but it is what I normally use myself; estimating the side effects of pinning is not that easy. But, really, don't fear pin_ptr<>.
Is it possible to write a program that can execute lua scripts just by using the lua52.dll file?
Or do I have to create a new C project and use all these header and source files?
I just want to create a few global variables and functions and make them available in the lua scripts that should be executed.
So in theory:
LoadDll("lua52.dll")
StartLua()
AddFunctionToLua("MyFunction1")
AddFunctionToLua("MyFunction2")
AddVariableToLua("MyVariable1")
...
ExecuteLuaScript("C:\myScript.lua")
CloseLua()
The standard command line interpreter for Lua is an example of just such a program. On windows, it is a small executable that is linked to lua52.dll. Its source is, of course, part of the Lua distribution.
Despite being located in the same folder as the sources to the Lua DLL, lua.c only references the public API for Lua, and depends only on the four public header files (lua.h, luaconf.h, lualib.h, and lauxlib.h) and the DLL itself.
An even simpler example that embeds a Lua interpreter in a C program is the following, derived from the example shown in the PiL book available online:
#include <stdio.h>
#include <string.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main (void) {
    char buff[256];
    int error;
    lua_State *L = luaL_newstate();   /* create state */
    luaL_openlibs(L);                 /* open standard libraries */

    while (fgets(buff, sizeof(buff), stdin) != NULL) {
        error = luaL_loadbuffer(L, buff, strlen(buff), "line") ||
                lua_pcall(L, 0, 0, 0);
        if (error) {
            fprintf(stderr, "%s", lua_tostring(L, -1));
            lua_pop(L, 1);  /* pop error message from the stack */
        }
    }

    lua_close(L);
    return 0;
}
In your existing application, you would need to call luaL_newstate() once and store the returned handle. Along with a call to luaL_openlibs(), you would likely want to also define one or more Lua modules representing your application's scriptable API. And of course, you need to call lua_close() sometime before exiting so that Lua has a chance to clean up its objects and in particular a chance to deal with any objects that the script authors are depending on to get resources released when the application exits.
With that in place, you generally provide a way to load script fragments provided by your user using luaL_loadbuffer() or any of several other functions built on top of lua_load(). Loading a script compiles it and leaves an anonymous function on the top of the stack that when called will execute all top-level statements in the script.
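To make the pseudo-code from the question concrete, here is a rough sketch (the names MyFunction1, MyVariable1 and the script path are taken from the question; everything else is plain Lua 5.2 C API) of registering a C function and a global variable and then running a script file:

#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
#include <stdio.h>

/* A C function exposed to scripts; Lua calls it with its own stack protocol. */
static int MyFunction1(lua_State *L) {
    double x = luaL_checknumber(L, 1);   /* first argument from the script */
    lua_pushnumber(L, x * 2);            /* one return value */
    return 1;                            /* number of results */
}

int main(void) {
    lua_State *L = luaL_newstate();      /* StartLua() */
    luaL_openlibs(L);

    lua_register(L, "MyFunction1", MyFunction1);   /* AddFunctionToLua("MyFunction1") */

    lua_pushinteger(L, 42);                        /* AddVariableToLua("MyVariable1") */
    lua_setglobal(L, "MyVariable1");

    /* ExecuteLuaScript("C:\myScript.lua") */
    if (luaL_dofile(L, "C:\\myScript.lua") != LUA_OK) {
        fprintf(stderr, "%s\n", lua_tostring(L, -1));
        lua_pop(L, 1);
    }

    lua_close(L);                                  /* CloseLua() */
    return 0;
}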
For a lot more discussion of this, see the chapters of Programming in Lua (an older edition is available online) that relate to the C API.
LoadDll("lua52.dll")
StartLua()
AddFunctionToLua("MyFunction1")
AddFunctionToLua("MyFunction2")
AddVariableToLua("MyVariable1")
...
ExecuteLuaScript("C:\myScript.lua")
CloseLua()
What language is the above written in? What application is running it? If this is a Lua script, then "AddFunctionToLua" is simply function name() end. If this is C, then you've already got a C project, no need to "create a new C project". So it's unclear what you're asking.
I have recently heard of compiling C++ code to JavaScript using Emscripten, and how, if asm.js optimizations are done, it has the potential of running applications really fast.
I have read several posts and tutorials and even watched some very interesting YouTube videos. I have also run the hello world example successfully.
However, I don't know the full capabilities of this approach, especially whether an entire new web app can/should be written in C++ as a whole, without glue code.
More concretely, I would like to write something similar to the following C++ (as a reference, not working code).
#include <window>

class ApplicationLogic : public DOMListener{
    private:
        int num;
    public:
        ApplicationLogic():num(0);
        virtual void onClickEvent(DOMEventData event){
            num++;
        }
        virtual ~ApplicationLogic(){}
}

int main(){
    DOMElement but = Window.getElementById("foo");
    ApplicationLogic app();
    but.setOnclick(app);
}
I hope it makes clear the idea, but the goal is to achieve something similar to:
A static function that initializes the module, run when the window is ready (the same behaviour that jquery.ready() gives), so listeners can be added to DOM elements.
A way to interact with the DOM directly from C/C++, hence the #include <window>, basically access to the DOM and other elements like JSON, Navigator and such.
I keep thinking of Lua and how, when a Lua script includes a shared object (dynamically linked library), it searches for an initialize function in that .so file, where one registers the functions available from outside the module; that is exactly how the return value of the module function created by asm.js acts. But I can't figure out how to emulate jquery.ready directly with C++.
As you can see I have little knowledge about asm.js, but I haven't found tutorials or similar for what I'm looking for. I have read references to standard libraries included at compile time for libc, libstdc++ and SDL, but no reference on how to manipulate the DOM from the C++ source.
What's up. I know this is an old topic, but I'm posting here in case anyone else comes here looking for the answer to this question (like I did).
Technically, yes it is possible - but with a ton of what you called "glue code", and also a good bit of JavaScript (which kind of defeats the purpose IMO). For example:
#include <emscripten.h>
#include <cstring>
#include <cstdio>

#define DIV   0
#define SPAN  1
#define INPUT 2
// etc. etc. etc. for every element you want to use

// Creates an element of the given type (see #defines above)
// and returns the element's ID
int RegisterElement(int type)
{
    return EM_ASM_INT({
        var i = 0;
        while (document.getElementById(i))
            i++;
        var t;
        if ($0 == 0) t = "div";
        else if ($0 == 1) t = "span";
        else if ($0 == 2) t = "input";
        else t = "span";
        var test = document.createElement(t);
        test.id = i;
        document.body.appendChild(test);
        return i;
    }, type);
}

// Calls document.getElementById(ID).innerHTML = text
void SetText(int ID, const char * text)
{
    char str[500];
    strcpy(str, "document.getElementById('");
    char id[16];
    sprintf(id, "%d", ID);
    strcat(str, id);
    strcat(str, "').innerHTML = '");
    strcat(str, text);
    strcat(str, "';");
    emscripten_run_script(str);
}

// And finally we get to our main entry point...
int main()
{
    RegisterElement(DIV);                // Creates an empty div, just as an example
    int test = RegisterElement(SPAN);    // Creates an empty span, test = its ID
    SetText(test, "Testing, 1-2-3");     // Set the span's inner HTML
    return 0;                            // And we're done
}
I had the same question and came up with this solution, and it compiled and worked as expected. But we're basically building a C/C++ API just to do what JavaScript already does "out of the box". Don't get me wrong - from a language standpoint I'd take C++ over JavaScript any day - but I can't help but think it's not worth the development time and possible performance issues involved in a setup like this. If I were going to do a web app in C++, I would definitely use Cheerp (the new name for Duetto).
As somebody pointed out already, if you start off with a fresh codebase exclusively for the web, then Duetto could be a solution. But in my opinion Duetto has many drawbacks, like no C allocators, which would probably make it very hard if you want to use third-party libraries.
If you are using emscripten, it provides an API for all kinds of DOM events, which does pretty much exactly what you want.
emscripten_set_click_callback(const char *target, void *userData, int useCapture, int (*func)(int eventType, const EmscriptenMouseEvent *mouseEvent, void *userData));
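For example, a rough sketch (assuming the emscripten/html5.h API and an element with id "foo" in the page, as in the question) might look like:

#include <emscripten.h>
#include <emscripten/html5.h>
#include <stdio.h>

/* Called by the browser whenever the element is clicked. */
EM_BOOL onClick(int eventType, const EmscriptenMouseEvent *mouseEvent, void *userData)
{
    int *num = (int *)userData;
    (*num)++;
    printf("clicks so far: %d\n", *num);
    return EM_TRUE;   /* we consumed the event */
}

int num = 0;

int main()
{
    /* target: element id "foo" (newer Emscripten versions expect a CSS selector like "#foo") */
    emscripten_set_click_callback("foo", &num, EM_FALSE, onClick);
    /* main() returns, but the runtime must stay alive to keep delivering events
       (e.g. build with -s NO_EXIT_RUNTIME=1 or set up a main loop). */
    return 0;
}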
Hope this helps.
I'm trying to build a small program that hosts VST effects, and I would like to scan a folder for plugin DLLs.
I know how to find all the dlls but now I have the following questions:
What is the best way to determine whether a given DLL is a VST plugin?
I tried just checking whether the DLL exports the proper function, and this works fine for plugins made with the more recent versions of the VST SDK, since they export a method called "VSTPluginMain", but older versions export a rather generic "main" function.
How do I determine if the plugin is an effect or an instrument?
How do I scan vst shell plugins?
Shell plugins are basically DLLs that somehow contain multiple effects. Examples of this are the plugins made by Waves Audio (http://www.waves.com/).
PS: If there is a library that can do all of this for me, please let me know.
How to determine a VST plugin?
Once you've found main/VSTPluginMain... call it!
If what's returned is NULL, it's not a VST.
If what's returned is a pointer to the bytes "VstP" (see VstInt32 magic; ///< must be #kEffectMagic ('VstP') in aeffect.h), then you have a VST.
The VSTPluginMain returns a pointer to an AEffect structure. You will need to look at this structure.
Effect or instrument? Check AEffect::flags & effFlagsIsSynth (where effFlagsIsSynth = 1 << 8).
Shell VSTs are more complex:
Category will be kPlugCategShell
Support the "shellCategory" canDo.
Use effShellGetNextPlugin to enumerate.
To instantiate one, respond to audioMasterCurrentId in your callback with the ID you want.
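Putting the detection steps above into code, a rough sketch for Windows might look like this (assuming the VST 2.x SDK header aeffectx.h and a LoadLibrary-based host; the minimal host callback only answers audioMasterVersion):

#include <windows.h>
#include "aeffectx.h"   /* from the VST 2.x SDK */

typedef AEffect* (*PluginEntryProc)(audioMasterCallback host);

/* Minimal host callback: enough for most plugins to start up. */
static VstIntPtr VSTCALLBACK hostCallback(AEffect *effect, VstInt32 opcode,
                                          VstInt32 index, VstIntPtr value,
                                          void *ptr, float opt)
{
    if (opcode == audioMasterVersion)
        return 2400;   /* we are a VST 2.4 host */
    return 0;
}

/* Returns the AEffect* if the DLL is a VST plugin, NULL otherwise. */
AEffect *probeVst(const char *path)
{
    HMODULE lib = LoadLibraryA(path);
    if (!lib)
        return NULL;

    PluginEntryProc entry =
        (PluginEntryProc)GetProcAddress(lib, "VSTPluginMain");
    if (!entry)
        entry = (PluginEntryProc)GetProcAddress(lib, "main");  /* older SDKs */
    if (!entry) {
        FreeLibrary(lib);
        return NULL;
    }

    AEffect *plugin = entry(hostCallback);
    if (!plugin || plugin->magic != kEffectMagic) {   /* 'VstP' */
        FreeLibrary(lib);
        return NULL;
    }
    return plugin;
}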
@Dave Gamble nailed it, but I wanted to add a few things on VST shell plugins, since they are a bit tricky to work with.
To determine if a VST is a shell plugin, send the effGetPlugCategory opcode to the plugin's dispatcher. If it returns kPlugCategShell, then it's a shell plugin. To get the list of sub-plugins in the shell, you basically call effShellGetNextPlugin until it returns 0. Example code snippet (adapted from a working VST host):
// All this stuff should probably be set up far earlier in your code...
// This assumes that you have already opened the plugin and called VSTPluginMain().
typedef VstIntPtr (*Vst2xPluginDispatcherFunc)(AEffect *effect, VstInt32 opCode,
                                               VstInt32 index, VstIntPtr value,
                                               void *ptr, float opt);
AEffect* plugin;                       // returned by VSTPluginMain()
Vst2xPluginDispatcherFunc dispatcher;  // set from plugin->dispatcher
char nameBuffer[40];

while(true) {
    memset(nameBuffer, 0, 40);
    VstInt32 shellPluginId = dispatcher(plugin, effShellGetNextPlugin, 0, 0, nameBuffer, 0.0f);
    if(shellPluginId == 0 || nameBuffer[0] == '\0') {
        break;
    }
    else {
        // Do something with the name and ID
    }
}
If you actually want to load a plugin in a VST shell, it's a bit trickier. First, your host needs to handle the audioMasterCurrentId opcode in the host callback. When you call the VST's VSTPluginMain() method to instantiate the plugin, it will call the host callback with this opcode and ask for the unique ID which should be loaded.
Because this callback is made before the main function returns (and hence, before it delivers an AEffect* to your host), you will probably need to store the shell plugin ID to load in a global variable, since you will not be able to save a pointer to any meaningful data in the void* user field of the AEffect struct in time for it to be passed back to you in the host callback.
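A rough sketch of that part of the host callback (gShellPluginIdToLoad is a made-up name for the global mentioned above; opcodes and types come from the VST 2.x SDK's aeffectx.h):

static VstInt32 gShellPluginIdToLoad = 0;   /* set just before calling VSTPluginMain() */

static VstIntPtr VSTCALLBACK hostCallback(AEffect *effect, VstInt32 opcode,
                                          VstInt32 index, VstIntPtr value,
                                          void *ptr, float opt)
{
    if (opcode == audioMasterCurrentId)
        return gShellPluginIdToLoad;   /* the shell asks which sub-plugin to create */
    if (opcode == audioMasterVersion)
        return 2400;
    return 0;
}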
If you want to develop your VST Host application in .NET take a look at VST.NET