I'm trying to use the PiGPIO library with Kotlin/Native as a linked library (not using the daemon).
So I'm using C interop with a .def file that references the pigpio.h file.
It works (I managed to get an LED blinking), but there is an issue with the typing of integers.
Although I didn't enable the experimental unsigned integers feature, the generated stubs use the type UInt.
For example for the parameters of this function:
#kotlinx.cinterop.internal.CCall public external fun gpioSetMode(gpio: kotlin.UInt, mode: kotlin.UInt): kotlin.Int { /* compiled code */ }
That's OK with me as they are of type unsigned in C and I want this to be as type-safe as possible:
int gpioSetMode(unsigned gpio, unsigned mode);
Now the problem is that the values to be used as parameters for the functions are defined using macro definitions in the .h file. For example for the mode parameter:
#define PI_INPUT 0
#define PI_OUTPUT 1
The generated Kotlin constants corresponding to those values are of type Int:
public const val PI_INPUT: kotlin.Int /* compiled code */
public const val PI_OUTPUT: kotlin.Int /* compiled code */
However, although calling the function with the constant as a parameter is possible:
gpioSetMode(14, PI_OUTPUT) // compiles fine
I can't create a function that takes the mode as a parameter and call it with the constant:
fun main() {
setMode(PI_OUTPUT) // fails to compile (Type Mismatch)
}
fun setMode(mode: UInt) {
gpioSetMode(14, mode)
}
Is there a way to force all constants with positive integer values to be of type UInt?
AFAIK, there is no such option in the cinterop tool. One could say the problem stems from the library header not using unsigned literals in its #define section; the suffix can be omitted in C, so the header itself is fine, but the tool is stricter and treats every integer literal without a suffix as signed.
As for why your generated function behaves the way it does: Kotlin has the concept of smart casts (see the documentation), and that is the problem here. The documentation notes that smart casting is only available for checks inside the same module. In your case, gpioSetMode(gpio, mode) and PI_OUTPUT are located in the same module, while your setMode is in another one; that's why the first call compiles and the second one does not.
I managed to work around it in my small sample by adding this line to my code, redefining the constant:
import my.*
const val PI_OUTPUT = my.PI_OUTPUT
where my is the library package (most probably pigpio in your case). After that, smart casts will be available for the library functions and for all functions you declare in this module.
This code:
#include <stdio.h>
int main()
{
void (^a)(void) = ^ void () { printf("test"); } ;
a();
}
Compiles without warnings with clang -Weverything -pedantic -std=c89 (version clang-800.0.42.1) and prints test.
I could not find any information about standard C having lambdas; also, GCC has its own syntax for lambdas, and it would be strange for them to do this if a standard solution existed.
This behavior seems to be specific to newer versions of Clang, and is a language extension called "blocks".
The Wikipedia article on C "blocks" also provides information which supports this claim:
Blocks are a non-standard extension added by Apple Inc. to Clang's implementations of the C, C++, and Objective-C programming languages that uses a lambda expression-like syntax to create closures within these languages. Blocks are supported for programs developed for Mac OS X 10.6+ and iOS 4.0+, although third-party runtimes allow use on Mac OS X 10.5 and iOS 2.2+ and non-Apple systems.
Emphasis above is mine. On Clang's language extension page, under the "Block type" section, it gives a brief overview of what the Block type is:
Like function types, the Block type is a pair consisting of a result value type and a list of parameter types very similar to a function type. Blocks are intended to be used much like functions with the key distinction being that in addition to executable code they also contain various variable bindings to automatic (stack) or managed (heap) memory.
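For illustration, a minimal sketch (assuming Clang with -fblocks; on non-Apple systems you also need to link against a blocks runtime such as -lBlocksRuntime) of a block capturing a local variable by value:
#include <stdio.h>

int main(void)
{
    int captured = 42;
    /* the block copies 'captured' into its own storage when it is created */
    int (^addToCaptured)(int) = ^(int x) { return captured + x; };
    printf("%d\n", addToCaptured(8));   /* prints 50 */
    return 0;
}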
GCC also has something similar to blocks called lexically scoped nested functions. However, there are some key differences, also noted in the Wikipedia article on C blocks:
Blocks bear a superficial resemblance to GCC's extension of C to support lexically scoped nested functions. However, GCC's nested functions, unlike blocks, must not be called after the containing scope has exited, as that would result in undefined behavior.
GCC-style nested functions also require dynamic creation of executable thunks when taking the address of the nested function. [...].
Emphasis above is mine.
The C standard does not define lambdas at all, but implementations can add extensions.
GCC also added an extension so that programming languages that support lambdas with static scope can be translated easily to C, compiling closures directly.
Here is an example of the GCC extension (nested functions) that implements closures:
#include <stdio.h>

/* GCC's nested-function extension: 'inside' closes over the parameter x.
   Calling the returned pointer after mk_counter has exited relies on a stack
   trampoline that no longer exists, which is undefined behaviour. */
int (*mk_counter(int x))(void)
{
    int inside(void) {
        return ++x;
    }
    return inside;
}

int main() {
    int (*counter)(void) = mk_counter(1);
    int x;

    x = counter();
    x = counter();
    x = counter();
    printf("%d\n", x);
    return 0;
}
I have a Modelica external C function that calls a function that is in a .dll.
In the C function in the .dll, I would like to make use of the ModelicaError() function. However, when
ModelicaUtilities.h is #included, a number of errors occur.
What is the correct method for doing this?
I take it I'll need to link against an existing Dymola .lib; which one? What should DYMOLA_STATIC be defined as?
Or should I be compiling the .dll in such a way that these missing functions will be available after compilation with the model?
Any insight into this would be great. Thanks!
From all I know, it is currently not possible in a tool-independent way to have shared objects (DLLs on Windows) depend on ModelicaError (or any other function of ModelicaUtilities). See https://github.com/modelica/ModelicaSpecification/issues/2191 for the open issue on the Modelica Language Specification.
To use the ModelicaError function in a DLL, you send a pointer to ModelicaError into the DLL. To do this from Dymola, create a wrapper function that passes the pointer to the DLL function. For example, MathLibraryWrapper:
#pragma once
#include "ModelicaUtilities.h"  /* declares ModelicaError */
#include "MathLibrary.h"

int fibonacci_next_int_wrap()
{
    /* hand the ModelicaError function pointer over to the DLL function */
    return fibonacci_next_int(&ModelicaError);
}
This calls the fibonacci_next_int function, which is in MathLibrary.cpp in the DLL. It is modified to accept a pointer to the ModelicaError function:
int fibonacci_next_int(void (*mError)(const char *))
{
    (*mError)("broken");           /* report an error through the pointer passed in from Dymola */
    return (int)fibonacci_next();
}
If this is run it will immediately crash with "broken".
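For completeness, here is a hypothetical sketch of what MathLibrary.h might declare, so that the DLL side only depends on a plain C function pointer rather than on any Dymola symbols:
/* Hypothetical MathLibrary.h sketch: the error reporter is passed in as a
   plain C function pointer, so the DLL needs no Dymola import library. */
typedef void (*ModelicaErrorFn)(const char *);

int fibonacci_next_int(ModelicaErrorFn mError);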
I came across an example for a C-function declared as:
static inline CGPoint SOCGPointAdd(const CGPoint a, const CGPoint b) {
return CGPointMake(a.x + b.x, a.y + b.y);
}
Until now, I declared utility C-functions in .h files and implemented them in .m files, just like this:
CGPoint SOCGPointAdd(const CGPoint a, const CGPoint b) {
return CGPointMake(a.x + b.x, a.y + b.y);
}
I can use this function "inline" anywhere I want, and it should also be "static" because it's not associated with any object, unlike an Objective-C method. What is the point / advantage of specifying "static" and "inline"?
inline does not mean you can use the function “inline” (it is normal to use functions inside other functions; you do not need inline for that); it encourages the compiler to build the function into the code where it is used (generally with the goal of improving execution speed).
static means the function name is not externally linked. If the function were not declared static, the compiler is required to make it externally visible, so that it can be linked with other object modules. To do this, the compiler must include a separate non-inline instance of the function. By declaring the function static, you are permitting all instances of it to be inlined in the current module, possibly leaving no separate instance.
static inline is usually used with small functions that are better done in the calling routine than by using a call mechanism, simply because they are so short and fast that actually doing them is better than calling a separate copy. E.g.:
static inline double square(double x) { return x*x; }
If the storage class is extern, the identifier has external linkage and the inline definition also provides the external definition. If the storage class is static, the identifier has internal linkage and the inline definition is invisible in other translation units.
By declaring a function inline, you can direct the compiler to integrate that function's code into the code for its callers (to replace the complete code of that function directly into the place from where it was called). This makes execution faster by eliminating the function-call overhead. That's why inline functions should be very short.
In C, inline (by itself) means that the definition is an inline definition. It doesn't get internal linkage; it simply never reaches the linker as a symbol. This means that if the compiler doesn't use that inline definition to inline every single reference to the function in the translation unit, there will be a linker error unless a symbol with the same name (C uses unmangled identifiers) and external linkage is exported by another translation unit in the link. Whether the compiler actually inlines references to the function is controlled exclusively by the optimisation level or __attribute__((always_inline)).
There is no difference between static inline and static: neither inlines the function by itself. At -O0 both emit the function in the assembly output as an internal-linkage symbol for the linker; at -O1 and above both inline the calls and optimise the emitted copy away. static inline does have one quirk: you can use a non-static inline prototype before it, except that this prototype is ignored and isn't used as a forward declaration (whereas using a non-static prototype before a static function is an error).
extern inline (GCC < 5.0, which used -std=gnu90 / gnu89 by default) / inline (GCC 5.0 onwards, which uses -std=gnu11): this is a compiler-only inline definition. No externally visible function is emitted (in the assembly output, for the assembler and linker) for this inline definition. If the compiler does not actually inline every reference to the function in the file (inlining happens at higher optimisation levels, or if you use __attribute__((always_inline)) inline float func()), there will be a linker error unless a symbol with the same name and external linkage is exported by another translation unit. This allows an inline definition and an out-of-line function with the same name to be defined separately, one marked inline and the other out-of-line, but not in the same translation unit, since the compiler would confuse them and treat the out-of-line definition as a redefinition error. Inline definitions are only ever visible to the compiler, and each translation unit can have its own; they cannot be exported to other files because they never reach the linking stage. To achieve this at compile time, the inline definition can be placed in a header file and included in each translation unit. In other words, the inline keyword is a compiler directive, while extern / static describe the out-of-line version produced for the linker. If the function is not defined in the translation unit, it cannot be inlined, because resolving it is left to the linker. If the function is defined but not marked inline, the compiler will use that definition if it decides to inline.
inline (GCC < 5.0) / extern inline (GCC ≥ 5.0): an externally visible function is emitted for this inline definition regardless of whether it is inlined or not, which means this variant can only be used in one of the translation units. Note that under the old GNU89 rules the keywords are intuitively backwards: plain inline is the variant that provides the externally visible definition, while extern inline does not. (A sketch of the common C99 / gnu11 arrangement is given after the static inline case below.)
static inline: a locally visible out-of-line function is emitted by the compiler into the assembly output (as a local symbol for the assembler) for this inline definition, but it may be optimised out at higher optimisation levels if all calls can be inlined; it can never cause a linker error. It behaves identically to static, because the compiler inlines a plain static definition at higher optimisation levels just as it does a static inline one.
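As promised above, here is a minimal sketch of the header-plus-one-external-definition pattern, assuming -std=c99 or newer (or GCC ≥ 5's default gnu11); the file and function names are hypothetical:
/* twice.h -- every translation unit that includes this header sees an inline
   definition; on its own it emits no externally visible symbol */
inline int twice(int x) { return 2 * x; }

/* twice.c -- exactly one translation unit provides the external, out-of-line
   definition that the linker can fall back on for calls that were not inlined */
#include "twice.h"
extern inline int twice(int x);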
An inline function that isn't static shouldn't contain non-const static storage duration variables or access static file-scope variables; this produces a compiler warning. The reason is that the inline and out-of-line versions of the function would have distinct static variables if the out-of-line version is provided by a different translation unit. The compiler may inline some references, not emit a local symbol for them, and leave the remaining references to the linker, which may resolve them to an external function symbol that is assumed to be the same function because it has the same identifier. So the warning reminds the programmer that the static should logically be const, because modifying and reading it results in undefined behaviour: if the compiler inlines a reference to the function, it reads a fresh static value rather than the one written by a previous, non-inlined call, because the variable written in that previous call belonged to a different translation unit. In effect there is a copy local to each translation unit plus a global copy, and it is unspecified which copy is being accessed. Making the variable const ensures that all the copies are identical and never change relative to each other, which keeps the behaviour defined and known.
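A minimal sketch of the case described above (hypothetical function name); GCC and Clang both diagnose the non-const static:
/* inline definition with external linkage: each translation unit (and each
   inlined copy) could end up using a different 'id', so the compiler warns */
inline int next_id(void) {
    static int id = 0;   /* non-const static storage duration: diagnosed */
    return ++id;
}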
Using an inline / extern inline prototype before/after a non-inline definition means that the prototype is ignored.
Using an inline prototype before an inline definition is how to prototype an inline function without side effects; declaring an inline prototype after the inline definition changes nothing unless the storage specifier changes.
Using an extern inline / extern / regular prototype before/after an inline definition is identical to an extern inline definition; it is a hint that provides an external out-of-line definition of the function, using the inline definition.
Using extern inline / inline on a prototype without a definition in the file, where the function is referenced in the file, results in inline being ignored, and then it behaves as a regular prototype (extern / regular, which are identical).
Using a static inline / static on a prototype without a definition in the file but it is referenced in the file results in correct linkage and correct type usage but a compiler warning saying that the function with internal linkage has not been defined (so it uses an external definition)
Using a regular / extern / extern inline prototype before a static inline or static definition is a 'static declaration of 'func' follows non-static declaration' error; using it after does nothing and they are ignored. Using a static or static inline prototype before/after a static inline definition is allowed. Using an inline prototype before a static inline definition is ignored and will not act as a forward declaration. This is the only way in which static inline differs from static as a regular prototype before a static definition results in an error, but this does not.
Using a static inline prototype before a regular / extern / static / static inline / extern inline definition results in static inline overriding the specifiers, and it acts correctly as a forward declaration.
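To illustrate the last point, a static inline prototype acting as a forward declaration of a static inline definition (hypothetical name):
static inline int clamp01(int x);   /* accepted as a forward declaration */

static inline int clamp01(int x) {
    return x < 0 ? 0 : (x > 1 ? 1 : x);
}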
__attribute__((always_inline)) always inlines the function symbol in the translation unit, and uses this definition. The attribute can only be used on definitions. The storage / inline specifiers are unaffected by this and can be used with it.
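For example, a hedged sketch combining the attribute with static inline (hypothetical name):
/* every reference to add1 in this translation unit is inlined, even at -O0 */
static inline __attribute__((always_inline)) int add1(int x) { return x + 1; }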
Inline functions are meant to be defined in header files, and only small functions should be defined this way.
They should be static so that no externally visible copy of the function has to be emitted (see the explanation of static above).
I've inherited a piece of custom test equipment with a control library built in a COM object, and I'm trying to connect it to our Tcl test script library. I can connect to the DLL using TCOM, and do some simple control operations with single int parameters. However, certain features are controlled by passing in a C/C++ struct that contains the control blocks, and attempting to use them in TCOM is giving me an error 0x80020005 {Type mismatch.}. The struct is defined in the .idl file, so it's available to TCOM to use.
The simplest example is a particular call as follows:
C++ .idl file:
struct SourceScaleRange
{
float MinVoltage;
float MaxVoltage;
};
interface IAnalogIn : IDispatch{
...
[id(4), helpstring("method GetAdcScaleRange")] HRESULT GetAdcScaleRange(
[out] struct SourceScaleRange *scaleRange);
...
}
Tcl wrapper:
::tcom::import [file join $::libDir "PulseMeas.tlb"] ::char
set ::characterizer(AnalogIn) [::char::AnalogIn]
set scaleRange ""
set response [$::characterizer(AnalogIn) GetAdcScaleRange scaleRange]
Resulting error:
0x80020005 {Type mismatch.}
while executing
"$::characterizer(AnalogIn) GetAdcScaleRange scaleRange"
(procedure "charGetAdcScaleRange" line 4)
When I dump TCOM's methods, it knows the name of the struct, at least, but it seems to have dropped the struct keyword. Some introspection code:
set ifhandle [::tcom::info interface $::characterizer(AnalogIn)]
puts "methods: [$ifhandle methods]"
returns
methods: ... {4 VOID GetAdcScaleRange {{out {SourceScaleRange *} scaleRange}}} ...
I don't know if this is meaningful or not.
At this point, I'd be happy to get any ideas on where to look next. Is this a known TCOM limitation (undocumented, but known)? Is there a way to pre-process the parameter into an appropriate format using tcom? Do I need to force it into a correctly sized block of memory via binary format by manual construction? Do I need to take the DLL back to the original developer and have him pull out all the struct parameters? (Not likely to happen, in this reality.) Any input is good input.
I'm working on integrating objective-git into my project, but when I include their headers in my sources, I get these errors on several of their enum declarations:
objective-git/Classes/GTRepository.h:57:16: Non-integral type 'git_reset_t' is an invalid underlying type
Here's the code in question:
typedef enum : git_reset_t {
GTRepositoryResetTypeSoft = GIT_RESET_SOFT,
GTRepositoryResetTypeMixed = GIT_RESET_MIXED,
GTRepositoryResetTypeHard = GIT_RESET_HARD
} GTRepositoryResetType;
I changed git_reset_t to NSUInteger (typedef'd to unsigned long), and that got it to compile, but of course I'd rather not have to change the library files.
Objective-git compiles just fine in its own project, and I can't find any significant difference in the compiler settings between that project and mine. What could I be missing?
This is with Xcode 4.5, compiling with Apple llvm 4.1.
Update: The clue I missed was that the error only happened in a .mm file, and .m files were fine, so somehow this underlying enum type doesn't work in C++ (even if I enable C++11). As a workaround I put a fake minimal @interface declaration for the one objective-git class I use in that file, so I don't have to include the headers, but I'd still like to find a cleaner solution.
Google turns up this file containing this:
typedef enum {
GIT_RESET_SOFT = 1, /** Move the head to the given commit */
GIT_RESET_MIXED = 2, /** SOFT plus reset index to the commit */
GIT_RESET_HARD = 3, /** MIXED plus changes in working tree discarded */
} git_reset_t;
This is an old-style enumeration, with int as the underlying type. But git_reset_t itself is not int; it's a distinct type. It is not an integral type, so it can't be used as the underlying type of a new-style (fixed-underlying-type) enumeration.
The fix is to use typedef enum : int or, if you can use C++ and want to be extra expository,
typedef enum : std::underlying_type< git_reset_t >::type
I haven't tried, but you could also try this in ObjC without C++:
typedef enum : __underlying_type( git_reset_t )
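If editing the library header turns out to be acceptable, the plain int suggestion applied to the declaration from GTRepository.h would look something like this (a sketch, not tested against objective-git):
// Fixed underlying type spelled as int, valid in both Objective-C and Objective-C++ (C++11)
typedef enum : int {
    GTRepositoryResetTypeSoft  = GIT_RESET_SOFT,
    GTRepositoryResetTypeMixed = GIT_RESET_MIXED,
    GTRepositoryResetTypeHard  = GIT_RESET_HARD
} GTRepositoryResetType;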