Convert FlatBuffer to JSON from various languages - flatbuffers

Does FlatBuffers allow you to convert a binary FlatBuffers file to and from JSON (the schema will of course be known)?
My idea is to define the schema of the structures for a pipe-and-filter architecture in FlatBuffers. The FlatBuffers files will also be exchanged between pipes. However, some tools within some of the filters will require me to pass plain old JSON objects, converted from the FlatBuffers files. And I have several languages to support (C++, Python, Java, JS).
I've found a javascript library which seems to do this:
https://github.com/evanw/node-flatbuffers/
But it seems abandoned, and I'm rather interested in officially supported ways.

Only C++ provides this functionality out of the box.
For other languages, you can wrap the C++ parser/generator, and call it (see e.g. for Java: http://frogermcs.github.io/json-parsing-with-flatbuffers-in-android/).
@evanw is the original author of the JS port in FlatBuffers, so the project you mention may be usable, but I don't think he's actively maintaining it anymore.
Alternatively, if this runs on a server and you can run command-line utilities, you can use the flatc binary to do the conversion for you via a file.
Ideally, all languages would have their own native parser, but that is a lot of work to duplicate. While interfacing with C/C++ is a pain, it has the advantage of giving you a really fast parser.
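To sketch what the C++ route looks like in code (this is an outline rather than drop-in code: the exact signature of GenerateText has changed across FlatBuffers releases, and the schema/buffer variables are placeholders):

#include <string>
#include "flatbuffers/idl.h"

// Convert a binary FlatBuffer to JSON text, given the schema source (.fbs contents).
std::string FlatBufferToJson(const char *schema_source, const void *flatbuffer_data)
{
    flatbuffers::Parser parser;
    if (!parser.Parse(schema_source)) {
        return "";                  // schema failed to parse; see parser.error_
    }
    std::string json;
    // Older releases return bool here; newer ones return an error string (null on success).
    flatbuffers::GenerateText(parser, flatbuffer_data, &json);
    return json;
}

On the command line, the flatc route is roughly flatc --json --raw-binary yourSchema.fbs -- yourData.bin to go from binary to JSON, and flatc --binary yourSchema.fbs yourData.json for the reverse.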

It is easy to convert a FlatBuffers buffer to JSON using the C implementation, flatcc (FlatCC).
Please refer to the sample tests in the flatcc source tree: flatcc-master/test/json_test.
Generate the required JSON helper header files using:
flatcc_d -a --json <yourData.fbs>
This will generate yourData_json_printer.h. Include this header file in your program.
Modify the code below to fit <yourData>; buffer is the FlatBuffers input received from the other end.
Also, do not use sizeof() on buffer to get bufferSize for a FlatBuffer; pass the actual received size (print the buffer size before calling this function to verify it).
void flatbufToJson(const char *buffer, size_t bufferSize) {
    flatcc_json_printer_t ctx_obj, *ctx;
    FILE *fp = 0;
    const char *target_filename = "yourData.json";

    ctx = &ctx_obj;
    fp = fopen(target_filename, "wb");
    if (!fp) {
        fprintf(stderr, "%s: could not open output file\n", target_filename);
        printf("ctx is not initialized yet, so exit directly without cleanup\n");
        return;
    }
    flatcc_json_printer_init(ctx, fp);
    flatcc_json_printer_set_force_default(ctx, 1);
    /* Uses the same formatting as the golden reference file. */
    flatcc_json_printer_set_nonstrict(ctx);

    /* Check and modify here:
     * the following call must be rewritten based on your .fbs file
     * and the generated <yourData>_json_printer.h header. */
    <yourData>_print_json(ctx, buffer, bufferSize);

    flatcc_json_printer_flush(ctx);
    if (flatcc_json_printer_get_error(ctx)) {
        printf("could not print data\n");
    }
    fclose(fp);
    printf("######### JSON is done\n");
}
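For completeness, a minimal caller might look like this (the file name is a placeholder; in practice you would get the buffer and its true size from whatever transport delivers it):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Read a binary FlatBuffer from disk; "yourData.bin" is just a placeholder. */
    FILE *fp = fopen("yourData.bin", "rb");
    if (!fp) return 1;
    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);
    fseek(fp, 0, SEEK_SET);

    char *buffer = (char *)malloc(size);
    if (buffer && fread(buffer, 1, (size_t)size, fp) == (size_t)size) {
        flatbufToJson(buffer, (size_t)size);   /* writes yourData.json */
    }
    free(buffer);
    fclose(fp);
    return 0;
}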

Related

Code sharing between multiple independently compiled binaries/hex files

I'm looking for documentation/information on how to share information/code between multiple binaries compiled for Cortex-M0/M4/M7 architectures. The two binaries will be on the same chip and the same architecture. They are flashed at different locations; one binary sets the main stack pointer and program counter so that it "jumps" to the other binary. I want to share code between these two binaries.
I've done a simple copy of an array of function pointers into a section defined in the linker script in RAM, then read that RAM out in the other binary, cast it to an array, and used an index to call functions in the other binary. This does work as a proof of concept, but I think what I'm looking for is a bit more complex, as I want some way of describing compatibility between the two binaries. I want something like the functionality of shared libraries, but I'm unsure whether I need position-independent code.
As an example, the current copy process is basically:
Source binary:
void copy_func()
{
    /* copy the table of function pointers into the custom RAM section */
    memcpy(address_custom_ram_section, array_of_function_pointers, fixed_size);
}
Binary which is jumped to from the source binary:
array_fp_type get_funcs()
{
    /* read the table back out of the custom RAM section */
    memcpy(array_of_fp, address_custom_ram_section, fixed_size);
    return array_of_fp;
}
Then I can use the array_of_fp to call into functions residing in the source binary from the jump binary.
So what I'm looking for is resources or input from someone who has implemented a similar system. For example, I would like to not need a custom RAM section to copy the function pointers into.
I would be fine with the compilation step of the source binary outputting something that can be included in the compilation step of the jump binary. However, it needs to be reproducible, and recompiling the source binary shouldn't break compatibility with the jump binary (even if it now includes a different file from what was previously output), as long as you don't change the interface.
To clarify, the source binary shouldn't require any specific knowledge about the jump binary. The code should not reside in both binaries, as that would defeat the purpose of this mechanism. The overall goal of this mechanism is to save space when creating multi-binary applications on Cortex-M processors.
Any ideas or links to resources are welcome. If you have any more questions feel free to comment on the question and I'll try to answer it.
It's very hard for me to picture what you want to do, but if you're interested in having an application link against your bootloader/ROM, then see Loading symbol file while linking for a hint on what you could do.
Build your "source"(?) image, scrape its mapfile and make a symbol file, then use that when you link your "jump"(?) image.
This does mean you need to link your "jump" image against a specific version of your "source" image.
If you need them to be semi-version independent (i.e. you define a set of functions that get exported, but you can rebuild on either side), then you need to export function pointers at known locations in your "source" image and link against those function pointers in your "jump" image. You can simplify the bookkeeping by making a structure of function pointers and accessing the functions through that on either side.
For example:
shared_functions.h:
struct FunctionPointerTable
{
    void (*function1)(int);
    void (*function2)(char);
};

extern struct FunctionPointerTable sharedFunctions;
Source file in "source" image:
#include <stdio.h>
#include "shared_functions.h"

void function1Implementation(int a)
{
    printf("You sent me an integer: %d\r\n", a);
    function2Implementation((char)(a % 256));
    sharedFunctions.function2((char)(a % 256));
}

void function2Implementation(char b)
{
    printf("You sent me a char: %c\r\n", b);
}

struct FunctionPointerTable sharedFunctions =
{
    function1Implementation,
    function2Implementation,
};
Source file in "jump" image:
#include "shared_functions.h"
sharedFunctions.function1(1024);
sharedFunctions.function2(100);
When you compile/link the "source" image, take its mapfile, extract the location of sharedFunctions, and create a symbol file that is linked in when you build the "jump" image.
Note: the printfs (or anything directly called by the shared functions) would come from the "source" image (and not the "jump" image).
If you need them to come from the "jump" image (or be overridable), then you need to access them through the same function pointer table, and the "jump" image needs to fix the function pointer table up with its version of the relevant function. I updated function1() to show this. The direct call to function2 will always be the "source" version. The shared-function call version of it will go through the jump table and call the "source" version unless the "jump" image updates the function table to point to its own implementation.
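For instance, a rough sketch of that fix-up on the "jump" side might look like this (jumpFunction2Implementation and install_overrides are hypothetical names, not part of the original answer):

#include <stdio.h>
#include "shared_functions.h"

/* The "jump" image's own implementation of function2. */
static void jumpFunction2Implementation(char b)
{
    printf("Jump image got the char: %c\r\n", b);
}

/* Call early in the "jump" image's startup. After this, any call made through
   the shared table, from either image, lands in the "jump" implementation. */
void install_overrides(void)
{
    sharedFunctions.function2 = jumpFunction2Implementation;
}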
You CAN get away from the structure, but then you need to export the function pointers one by one (not a big problem), but you want to keep them in order and at a fixed location, which means explicitly putting them in the linker descriptor file, etc. etc. I showed the structure method to distill it down to the easiest example.
As you can see, things get pretty hairy, and there is some penalty (calling through the function pointer is slower because you need to load up the address to jump to).
As explained in the comments, we could imagine an application and a bootloader relying on the same dynamic library: both depend on the library, and the application can be changed without impact on the library or the bootloader.
I did not find an easy way to build a shared library with arm-none-eabi-gcc. However, this document gives some alternatives to shared libraries. In your case, I would recommend the jump table solution.
Write a library with the functions that need to be used in both the bootloader and the application.
"library" code
#include <stdint.h>

typedef void (*genericFunctionPointer)(void);

void lib_f1(void);
uint8_t lib_f2(uint8_t param);

// Use the linker script to place MySection at a known address.
// This could be a structure like in Russ Schultz's solution, but a struct may or may not
// be laid out identically in the library and the boot code. A struct would be much easier,
// though, and would avoid many function pointer casts.
const genericFunctionPointer FpointerArray[] __attribute__ ((section ("MySection"))) =
{
    (genericFunctionPointer)lib_f1,
    (genericFunctionPointer)lib_f2,
};

void lib_f1(void)
{
    //some code
}

uint8_t lib_f2(uint8_t param)
{
    //some code
    return param;
}
application and/or bootloader code
#include <stdint.h>

typedef void (*genericFunctionPointer)(void);

// Cast types matching the real library signatures.
typedef void (*correctCastF1)(void);
typedef uint8_t (*correctCastF2)(uint8_t);

enum
{
    lib_f1,
    lib_f2,
    NB_F,
};

// Use the linker script to place MySection at the same address the library was built with.
// In the linker script, also mark this section as NOLOAD, because it is initialized by the
// library and not by our code.
// volatile is needed here because you read from flash and the compiler may otherwise
// assume the array is initialized to NULL pointers.
volatile const genericFunctionPointer FpointerArray[NB_F] __attribute__ ((section ("MySection")));

int main(void)
{
    ((correctCastF1)FpointerArray[lib_f1])();
    uint8_t a = ((correctCastF2)FpointerArray[lib_f2])(10);
}
You can look into using linker sections. If you have your bootloader source code in folder bootloader, you can use
SECTIONS
{
    .bootloader :
    {
        build_output/bootloader/*.o(.text)
    } > flash_region1

    .binary1 :
    {
        build_output/binary1/*.o(.text)
    } > flash_region2

    .binary2 :
    {
        build_output/binary2/*.o(.text)
    } > flash_region3
}

How to pass a struct parameter using TCOM in Tcl

I've inherited a piece of custom test equipment with a control library built in a COM object, and I'm trying to connect it to our Tcl test script library. I can connect to the DLL using TCOM, and do some simple control operations with single int parameters. However, certain features are controlled by passing in a C/C++ struct that contains the control blocks, and attempting to use them in TCOM is giving me an error 0x80020005 {Type mismatch.}. The struct is defined in the .idl file, so it's available to TCOM to use.
The simplest example is a particular call as follows:
C++ .idl file:
struct SourceScaleRange
{
    float MinVoltage;
    float MaxVoltage;
};

interface IAnalogIn : IDispatch{
    ...
    [id(4), helpstring("method GetAdcScaleRange")] HRESULT GetAdcScaleRange(
        [out] struct SourceScaleRange *scaleRange);
    ...
}
Tcl wrapper:
::tcom::import [file join $::libDir "PulseMeas.tlb"] ::char
set ::characterizer(AnalogIn) [::char::AnalogIn]
set scaleRange ""
set response [$::characterizer(AnalogIn) GetAdcScaleRange scaleRange]
Resulting error:
0x80020005 {Type mismatch.}
while executing
"$::characterizer(AnalogIn) GetAdcScaleRange scaleRange"
(procedure "charGetAdcScaleRange" line 4)
When I dump TCOM's methods, it knows of the name of the struct, at least, but it seems to have dropped the struct keyword. Some introspection code
set ifhandle [::tcom::info interface $::characterizer(AnalogIn)]
puts "methods: [$ifhandle methods]"
returns
methods: ... {4 VOID GetAdcScaleRange {{out {SourceScaleRange *} scaleRange}}} ...
I don't know if this is meaningful or not.
At this point, I'd be happy to get any ideas on where to look next. Is this a known TCOM limitation (undocumented, but known)? Is there a way to pre-process the parameter into an appropriate format using tcom? Do I need to force it into a correctly sized block of memory via binary format by manual construction? Do I need to take the DLL back to the original developer and have him pull out all the struct parameters? (Not likely to happen, in this reality.) Any input is good input.

Implement lua scripting through dll calls?

Is it possible to write a program that can execute lua scripts just by using the lua52.dll file?
Or do I have to create a new C project and use all these header and source files?
I just want to create a few global variables and functions and make them available in the lua scripts that should be executed.
So in theory:
LoadDll("lua52.dll")
StartLua()
AddFunctionToLua("MyFunction1")
AddFunctionToLua("MyFunction2")
AddVariableToLua("MyVariable1")
...
ExecuteLuaScript("C:\myScript.lua")
CloseLua()
The standard command line interpreter for Lua is an example of just such a program. On windows, it is a small executable that is linked to lua52.dll. Its source is, of course, part of the Lua distribution.
Despite being located in the same folder as the sources to the Lua DLL, lua.c only references the public API for Lua, and depends only on the four public header files and the DLL itself.
An even simpler example that embeds a Lua interpreter in a C program is the following, derived from the example shown in the PiL book available online:
#include <stdio.h>
#include <string.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main (void) {
    char buff[256];
    int error;
    lua_State *L = luaL_newstate(); /* create state */
    luaL_openlibs(L);               /* open standard libraries */
    while (fgets(buff, sizeof(buff), stdin) != NULL) {
        error = luaL_loadbuffer(L, buff, strlen(buff), "line") ||
                lua_pcall(L, 0, 0, 0);
        if (error) {
            fprintf(stderr, "%s", lua_tostring(L, -1));
            lua_pop(L, 1); /* pop error message from the stack */
        }
    }
    lua_close(L);
    return 0;
}
In your existing application, you would need to call luaL_newstate() once and store the returned handle. Along with a call to luaL_openlibs(), you would likely want to also define one or more Lua modules representing your application's scriptable API. And of course, you need to call lua_close() sometime before exiting so that Lua has a chance to clean up its objects and in particular a chance to deal with any objects that the script authors are depending on to get resources released when the application exits.
With that in place, you generally provide a way to load script fragments provided by your user using luaL_loadbuffer() or any of several other functions built on top of lua_load(). Loading a script compiles it and leaves an anonymous function on the top of the stack that when called will execute all top-level statements in the script.
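As a rough sketch of how the pseudo-code from the question maps onto the real C API (names such as MyFunction1, MyVariable1, and the script path are just the question's placeholders; the wrapper function name is illustrative):

#include <stdio.h>

extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}

// A C function exposed to scripts ("AddFunctionToLua").
static int my_function1(lua_State *L) {
    double x = luaL_checknumber(L, 1);   // first argument from the script
    lua_pushnumber(L, x * 2);            // one return value
    return 1;
}

void run_script(void) {
    lua_State *L = luaL_newstate();      // "StartLua"
    luaL_openlibs(L);

    lua_register(L, "MyFunction1", my_function1);

    lua_pushinteger(L, 42);              // "AddVariableToLua"
    lua_setglobal(L, "MyVariable1");

    if (luaL_dofile(L, "C:\\myScript.lua")) {   // "ExecuteLuaScript"
        fprintf(stderr, "%s\n", lua_tostring(L, -1));
        lua_pop(L, 1);
    }

    lua_close(L);                        // "CloseLua"
}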
For a lot more discussion of this, see the chapters of Programming in Lua (an older edition is available online) that relate to the C API.
LoadDll("lua52.dll")
StartLua()
AddFunctionToLua("MyFunction1")
AddFunctionToLua("MyFunction2")
AddVariableToLua("MyVariable1")
...
ExecuteLuaScript("C:\myScript.lua")
CloseLua()
What language is the above written in? What application is running it? If this is a Lua script, then "AddFunctionToLua" is simply function name() end. If this is C, then you've already got a C project, no need to "create a new C project". So it's unclear what you're asking.

Description format for an embedded structure

I have a C structure that allow users to configure options in an embedded system. Currently the GUI we use for this is custom written for every different version of this configuration structure. What I'd like for is to be able to describe the structure members in some format that can be read by the client configuration application, making it universal across all of our systems.
I've experimented with describing the structure in XML and having the client read the file; this works in most cases except those where some of the fields have inter-dependencies. So the format that I use needs to have a way to specify these; for instance, member A must always be less than or equal to half of member B.
Thanks in advance for your thoughts and suggestions.
EDIT:
After reading the first reply I realized that my question is indeed a little too vague, so here's another attempt:
The embedded system needs to have access to the data as a C struct, running any other language on the processor is not an option. Basically, all I need is a way to define metadata with the structure, this metadata will be downloaded onto flash along with firmware. The client configuration utility will then read the metadata file over RS-232, CAN etc. and populate a window (a tree-view) that the user can then use to edit options.
The XML file that I mentioned tinkering with was doing exactly that, it contained the structure member name, data type, number of elements etc. The location of the member within the XML file implicitly defined its position in the C struct. This file resides on flash and is read by the configuration program; the only thing lacking is a way to define dependencies between structure fields.
The code is generated automatically using MATLAB / Simulink so I do have access to a scripting language to help with the structure creation. For example, if I do end up using XML the structure will only be defined in the XML format and I'll use a script to create the C structure during code generation.
Hope this is clearer.
For the simple case where there is either no relationship or a relationship with a single other field, you could add two fields to the structure: the "other" field number and a pointer to a function that compares the two. Then you'd need to create functions that compare two values and return true or false depending on whether or not the relationship is met. Well, I guess you'd need to create two functions that test the relationship and the inverse of the relationship (i.e. if field 1 needs to be greater than field 2, then field 2 needs to be less than or equal to field 1). If you need to place more than one restriction on the range, you can store a pointer to a list of function/field pairs.
An alternative is to create a validation function for every field and call it when the field is changed. Obviously this function could be as complex as you wanted but might require more hand coding.
In theory you could generate the validation functions for either of the above techniques from the XML description that you described.
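A minimal sketch of the first approach, using the "member A must always be less than or equal to half of member B" rule from the question (the field indices, types, and helper names here are illustrative, not from the original post):

#include <stdbool.h>
#include <stdint.h>

/* A relationship check between two field values; returns true when the constraint holds. */
typedef bool (*relation_check)(uint32_t value, uint32_t other_value);

static bool at_most_half_of(uint32_t value, uint32_t other_value)
{
    return value <= other_value / 2;
}

/* Per-field dependency metadata: which other field it is compared against and how. */
struct field_rule
{
    uint8_t field_index;        /* field being validated        */
    uint8_t other_field_index;  /* field it is compared against */
    relation_check check;       /* comparison to apply          */
};

/* Field 0 ("member A") must be at most half of field 1 ("member B"). */
static const struct field_rule rules[] =
{
    { 0, 1, at_most_half_of },
};

The inverse rule (member B must be at least twice member A) would simply be a second entry pointing the other way, as described above.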
I would have expected you to get some answers by now, but let me see what I can do.
Your question is a bit vague, but it sounds like you want one of
Code generation
An embedded extension language
A hand coded run-time mini language
Code Generation
You say that you are currently hand-tooling the configuration code each time you change this. I'm willing to bet that this is a highly repetitive task, so there is no reason you can't write a program to do it for you. Your generator should consume some domain-specific language and emit C code and header files which you subsequently build into your application. An example of what I'm talking about here would be GNU gengetopt. There is nothing wrong with the idea of using XML for the input language.
Advantages:
the resulting code can be both fast and compact
there is no need for an interpreter running on the target platform
Disadvantages:
you have to write the generator
changing things requires a recompile
Extension Language
Tcl, Python and other languages work well in conjunction with C code, and will allow you to specify the configuration behavior in a dynamic language rather than mucking around with C typing and strings and so on.
Advantages:
dynamic language probably means the configuration code is simpler
change configuration options without recompiling
Disadvantages:
you need the dynamic language running on the target platform
Mini language
You could write your own embedded mini-language.
Advantages:
No need to recompile
Because you write it yourself, it will run on your target
Disadvantages:
You have to write it yourself
How much does the struct change from version to version? When I did this kind of thing I hardcoded it into the PC app, which then worked out what the packet meant from the firmware version - but the only changes were usually an extra field added onto the end every couple of months.
I suppose I would use something like the following if I wanted to go down the metadata route.
#include <string.h>

enum { TYPE_UCHAR, TYPE_USHORT, TYPE_STRING };

typedef struct
{
    unsigned char field1;
    unsigned short field2;
    unsigned char a_string[4];
} data;

typedef struct
{
    unsigned char name[16];
    unsigned char type;
    unsigned short min;   /* wide enough to hold the largest field's range */
    unsigned short max;
} field_info;

field_info fields[3];

void init_meta(void)
{
    strcpy((char *)fields[0].name, "field1");
    fields[0].type = TYPE_UCHAR;
    fields[0].min = 1;
    fields[0].max = 250;

    strcpy((char *)fields[1].name, "field2");
    fields[1].type = TYPE_USHORT;
    fields[1].min = 0;
    fields[1].max = 0xffff;

    strcpy((char *)fields[2].name, "a_string");
    fields[2].type = TYPE_STRING;
    fields[2].min = 0;  /* n/a */
    fields[2].max = 0;  /* n/a */
}

void send_meta(void)
{
    /* rs232_packet and send_packet come from the transport layer */
    rs232_packet packet;
    memcpy(packet.payload, fields, sizeof(fields));
    packet.length = sizeof(fields);
    send_packet(packet);
}

P/Invoke with [Out] StringBuilder / LPTSTR and multibyte chars: Garbled text?

I'm trying to use P/Invoke to fetch a string (among other things) from an unmanaged DLL, but the string comes out garbled, no matter what I try.
I'm not a native Windows coder, so I'm unsure about the character encoding bits. The DLL is set to use "Multi-Byte Character Set", which I can't change (because that would break other projects). I'm trying to add a wrapper function to extract some data from some existing classes. The string in question currently exists as a CString, and I'm trying to copy it to an LPTSTR, hoping to get it into a managed StringBuilder.
This is what I have done that I believe is the closest to being correct (I have removed the irrelevant bits, obviously):
// unmanaged function
DLLEXPORT void Test(LPTSTR result)
{
    // eval->result is a CString
    _tcscpy(result, (LPCTSTR)eval->result);
}
// in managed code
[DllImport("Test.dll", CharSet = CharSet.Auto)]
static extern void Test([Out] StringBuilder result);
// using it in managed code
StringBuilder result = new StringBuilder();
Test(result);
// contents in result garbled at this point
// just for comparison, this unmanaged consumer of the same function works
LPTSTR result = new TCHAR[100];
Test(result);
Really appreciate any tips! Thanks!!!
One problem is using CharSet.Auto.
On an NT-based system this will assume that the result parameter in the native DLL will be using Unicode. Change that to CharSet.Ansi and see if you get better results.
You also need to size the buffer of the StringBuilder that you're passing in:
StringBuilder result = new StringBuilder(100); // problem if more than 100 characters are returned
Also - the native C code is using 'TCHAR' types and macros - this means that it could be built for Unicode. If this might happen it complicates the CharSet situation in the DllImportAttribute somewhat - especially if you don't use the TestA()/TestW() naming convention for the native export.
Don't use the [Out] attribute on the parameter, as you are not allocating it in the C function:
[DllImport("Test.dll", CharSet = CharSet.Auto)]
static extern void Test(StringBuilder result);
StringBuilder result = new StringBuilder(100);
Test(result);
This should work for you.
You didn't describe what your garbled string looks like. I suspect you are mixing up some MBCS strings and UCS-2 strings (using 2-byte wchar_ts). If every other byte is 0, then you are looking at a UCS-2 string (and possibly misusing it as an MBCS string). If every other byte is not 0, then you are probably looking at an MBCS string (and possibly misusing it as a Unicode string).
In general, I would recommend not using TCHARs (or LPTSTRs). They use macro magic to switch between char (1 byte) and wchar_t (2 bytes), depending on whether _UNICODE is #defined. I prefer to explicitly use char and wchar_t to make the code's intent very clear. However, you will need to call the -A or -W forms of any Win32 APIs that use TCHAR parameters: e.g. MessageBoxA() or MessageBoxW() instead of MessageBox() (which is a macro that checks whether _UNICODE is #defined).
Then you should change CharSet = CharSet.Auto to CharSet = CharSet.Ansi (if both caller and callee are using MBCS) or CharSet = CharSet.Unicode (if both caller and callee are using UCS-2 Unicode). But it sounds like your DLL is using MBCS, not Unicode.
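As a sketch of what the explicit-char recommendation could look like on the native side (the TestA name, the extra buffer-size parameter, and the reuse of eval->result from the question are illustrative additions, not part of the original DLL):

#include <string.h>

// Narrow-character export with an explicit buffer size, so no TCHAR macros are involved
// and the callee cannot overrun the caller's StringBuilder buffer.
extern "C" __declspec(dllexport) void TestA(char *result, int bufferSize)
{
    if (!result || bufferSize <= 0) return;
    // eval->result is the CString from the question; in an MBCS build it converts to const char *.
    strncpy(result, (const char *)eval->result, bufferSize - 1);
    result[bufferSize - 1] = '\0';
}

On the managed side this pairs with CharSet.Ansi and a StringBuilder created with an explicit capacity, as described above.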
pinvoke.net is a great wiki reference with many examples of P/Invoke function signatures for Win32 APIs.