I have a LabVIEW application that currently sends data to a C++ application via a DLL. I now need to send data back to the LabVIEW app from the C++ one. Can I trigger code in LabVIEW from a DLL call, or will I need to poll the DLL periodically to see if new data is waiting?
Or am I going about this in completely the wrong way?
It is possible to generate an event from C++ to trigger a normal LabVIEW event.
Here is an NI forums post discussing this structure.
And a code excerpt from that thread:
#include <utility.h>
#include <extcode.h>
#include "EventDLL.h"

// Generate a LabVIEW event
int GenerateLVEvent(LVUserEventRef *msg, int param)
{
    PostLVUserEvent(*msg, (void *)&param);
    return 0;
}
And here's the original source code as a PNG (source: vi-lib.com).
And here is the accompanying LabVIEW code:
The lower loop is LabVIEW code that sends a DLL event to the LabVIEW event handler.
This should be placed inside your DLL.
One of the input parameters should be the event pointer as a U32.
Good luck,
Ton
PS: if you are going to dive into DLL and LabVIEW interoperability, pay attention to everything RolfK says; he is a guru in that field.
I was going through the Dinosaur book by Galvin when I ran into the difficulty asked about in this question.
Typically application developers design programs according to an application programming interface (API). The API specifies a set of functions that are available to an application programmer, including the parameters that are passed to each function and the return values the programmer can expect.
The text adds that:
Behind the scenes the functions that make up an API typically invoke the actual system calls on behalf of the application programmer. For example, the Win32 function CreateProcess() (which unsurprisingly is used to create a new process) actually calls the NTCreateProcess() system call in the Windows kernel.
From the above two points I understood that programmers using the API make calls to the API function corresponding to the system call they want to make. The corresponding function in the API then actually makes the system call.
Next what the text says confuses me a bit:
The run-time support system (a set of functions built into libraries included with a compiler) for most programming languages provides a system-call interface that serves as the link to system calls made available by the operating system. The system-call interface intercepts function calls in the API and invokes the necessary system calls within the operating system. Typically, a number is associated with each system call, and the system-call interface maintains a table indexed according to these numbers. The system call interface then invokes the intended system call in the operating-system kernel and returns the status of the system call and any return values.
The above excerpt makes me feel that the functions in the API do not make the system calls directly. There are probably functions built into the system-call interface of the run-time support system, which are waiting for a system call from the function in the API.
(The text shows a diagram at this point explaining the working of the system-call interface.)
The text later explains the working of a system call in the C standard library with another diagram, which is quite clear.
I don't totally understand the terminology of the excerpts you shared, and some of it is also wrong, like in the blue image at the bottom. It says the standard C library provides a system-call interface, while it doesn't. The standard C library is just a standard, a convention: it says that if you write certain code, then the effect of that code when it is run should be according to the convention. It also says that the C library intercepts printf() calls, while it doesn't. This kind of terminology is confusing at best.
The C library doesn't intercept calls. As an example, on Linux, the open-source implementation of the C standard library is glibc. You can browse its source code here: https://elixir.bootlin.com/glibc/latest/source. When you write C/C++ code, you use standard functions which are specified in the C/C++ convention.
When you write code, this code will be compiled to assembly and then to machine code. Assembly is just a higher-level representation of machine code; it is closer to the actual code, as it is easier to translate to machine code than C/C++ is. The easiest case to understand is when you compile code statically: all the code is included in your executable. For example, if you write
#include <stdio.h>

int main() {
    printf("Hello, World!");
    return 0;
}
the printf() function is declared in stdio.h, a header provided by the C library implementation (glibc on Linux), written specifically for one OS or a set of UNIX-like OSes. This header provides prototypes which are defined in other .c files provided by glibc; those .c files provide the actual implementation of printf(). The printf() function will make a system call, which relies on the presence of an OS like Linux to run. When you compile statically, all the code is included, up to the system call. You can see my answer here: Who sets the RIP register when you call the clone syscall?. It specifically explains how system calls are made.
In the end you'll have something like assembly code pushing some arguments into some conventional registers, then the actual syscall instruction, which jumps to the kernel entry point whose address is stored in an MSR. I don't totally understand the mechanism behind printf(), but it will end up in the Linux kernel's implementation of the write system call, which will write to the console and return.
I think what confuses you is that the "run-time support system" is probably referring to higher-level languages which are not compiled to machine code directly, like Python or Java. Java has a virtual machine which translates the bytecode produced by compilation into machine code at runtime. It can be confusing not to make this distinction when talking about different languages. Maybe your book is lacking examples.
I want to preface this by pointing out that I am new to C++/CLI.
We have one solution with an unmanaged C++ application project (we'll call it "Application"), a C# .NET Remoting project which builds to a DLL (we'll call it "Remoting"), and a C++/CLI project for interfacing between the two (we'll call it "Bridge").
Everything seems to be working, we have an IMyEventHandler interface in Bridge which successfully receives events from Remoting and can call methods in Application.
#ifndef EVENTS_HANDLER_INTERFACE_H_INCLUDED
#include "EventsHandlerInterface.h"
#endif

#define DLLEXPORT __declspec(dllexport)

#ifdef __cplusplus
extern "C"
{
#endif

DLLEXPORT bool RegisterAppWithBridge(IMyEventsHandler * aHandler);
DLLEXPORT void PostEventToServer(AppToServerEvent eventType);
DLLEXPORT void PollEventsFromServer();

#ifdef __cplusplus
}
#endif
In Bridge implementation we have a method for handling an event and depending on which event type it is, we will call a different method for handling that exact type:
void Bridge::OnReceiveServerEvent(IMyEvent^ aEvent)
{
    // Determine event type
    ...
    Handle_SpecificEventType();
}
This all is working fine so far. Once we call the handler for a known type of event, we can directly cast to it from the generic interface type, and this is where we start to see the issue. All these event types are defined in another DLL generated from C#. Simple events that have just ints or strings work just fine, but we have this SpecificEventType which contains a list of another type (we'll call it "AnotherType"), all defined in another DLL. All required DLLs have been added as references, and I am able to gcnew an AnotherType without it complaining.
However, once I try to get AnotherType element out of the list, we see the build error: "C2526 'System::Collections::Generic::List::GetEnumerator' C linkage function cannot return C++ class"
void Bridge::Handle_SpecificEventType(IMyEvent ^evt)
{
    SpecificEventType ^e = (SpecificEventType ^)evt;

    // We can pull the list itself, but accessing elements gives the error
    System::Collections::Generic::List<AnotherType ^> ^lst = e->ThatList;

    // These all cause the error
    array<AnotherType ^> ^arr = lst->ToArray();
    AnotherType ^singleElement = lst[0];
    for each(AnotherType ^loopElement in lst) {}
}
To clarify why we're doing this, we are trying to take managed events defined in a C# DLL and sent through .net remoting from a newer C# server, and "translate" them for an older unmanaged C++ application. So the end goal is to create a copy of the C# type "SpecificEventType" and translate it to unmanaged "SpecificEventType_Unmanaged" and just make a call to the application with that data:
// Declared in Bridge.h and gets assigned from the DLLEXPORT RegisterAppWithBridge method.
IMyEventsHandler *iApplicationEventHandler;
// Bridge.cpp
void Bridge::Handle_SpecificEventType(IMyEvent ^evt)
{
... Convert SpecificEventType to SpecificEventType_Unmanaged
iApplicationEventHandler->Handle_SpecificEvent(eventUnmanaged);
}
This messaging all seems to be working and setup correctly - but it really doesn't want to give us the elements from the generic list - preventing us from pulling the data and building an unmanaged version of the event to send down to the application.
I hope I have explained this well, again I am new to CLI and haven't had to touch C++ for some years now - so let me know if any additional details are needed.
Thanks in advance for any assistance.
Turns out the issue was that all the methods in the Bridge implementation were still inside an extern "C" block. So much time lost for such a simple issue.
I have an unmanaged c++ DLL that calls c# code through a managed c++ wrapper. The unmanaged c++ DLL is a plug-in for some application (outside my control). When this application calls the unmanaged c++ DLL everything works fine until the managed c++ code tries to use the c# code. Then it crashes.
I have written a test application that does the same thing as the application, that is, it calls the unmanaged c++ DLL. This works fine.
The code is as simple as it could be:
unmanaged c++:
extern "C" __declspec(dllexport) void UnmanagedMethodCalledUponByApplication()
{
    new Bridge();
}
managed c++:
Bridge::Bridge()
{
    gcnew Managed(); // This line crashes
}
c#:
public class Managed
{
}
I have tried to add a try-catch (...) block around the problematic line but it doesn't catch the error.
If I replace the gcnew Managed(); line with MessageBox::Show("Alive!"); it works fine. So my guess is that something is wrong with my c# project settings.
I have tried to compile it with different platforms (Any CPU and x86). I have tried to change target framework. I have tried to call a static method in Managed instead of using gcnew. Still crashing.
Any ideas what might be the problem?
Update:
After advice in the comments and an answer, I attached the debugger. Now I see that I get a System.IO.FileNotFoundException saying that the managed DLL (or one of its dependencies) can't be found.
Here's a guess: The DLLs are placed together, but they are not located in the current directory. The unmanaged c++ DLL is loaded correctly since the main application specifies the path to it. The managed c++ is actually a lib, so that code works fine as well. But when the managed c++ tries to load the c# DLL it looks for it in the wrong directory.
Update:
The way to fix this is to load the c# DLL dynamically, using reflection.
extern "C" __declspec(dllexport)
Yes, that's a cheap and easy way to get the compiler to generate the required stub that loads and initializes the CLR so it can execute managed code. Problem is, it doesn't do anything reasonable to deal with exceptions thrown by managed code. And managed code likes throwing exceptions, they are a terrific trouble-shooting tool. That stops being terrific when there's no way for you to retrieve the exception information.
The best you could possibly do from native code is use the __try/__except keywords to catch the managed exception. Exception code is 0xe0434f4d. But that still doesn't give you access to the information you need, the exception message and the holy stack trace.
You can debug it. Project + Properties, Debugging, change the Debugger Type to "Mixed". Then Debug + Exceptions, tick the Thrown checkbox for CLR Exceptions. The debugger stops when the exception is thrown so you can see what's wrong.
Getting decent diagnostics after you shipped your code requires a better interop mechanism. Like using COM interop or hosting the CLR yourself.
I could not find any good documents about STM32 programming on the internet. STM's own documents do not explain anything beyond the register functions. I would greatly appreciate it if anyone could answer my following questions.
I noticed that in all the example programs that STM provides, local variables for main() are always defined outside of the main() function (with occasional use of the static keyword). Is there any reason for that? Should I follow a similar practice? Should I avoid using local variables inside main()?
I have a global variable which is updated within the clock interrupt handler. I am using the same variable inside another function as a loop condition. Don't I need to access this variable using some form of atomic read operation? How can I know that a clock interrupt does not change its value in the middle of the function execution? Do I need to disable the clock interrupt every time I use this variable inside a function? (That seems extremely inefficient to me, as I use it as a loop condition. I believe there should be better ways of doing it.)
Keil automatically inserts a startup code which is written in assembly (i.e. startup_stm32f4xx.s). This startup code has the following import statements:
IMPORT SystemInit
IMPORT __main
In C, it makes sense. However, in C++ both main and SystemInit would have different (mangled) names (e.g. _int_main__void). How can this startup code still work in C++ even without using extern "C"? (I tried it, and it worked.) How can the C++ linker (armcc --cpp) associate these statements with the correct functions?
You can use local or global variables. Using locals in embedded systems carries the risk of your stack colliding with your data; with globals you don't have that problem. But this is true no matter where you are: embedded microcontroller, desktop, etc.
I would make a copy of the global in the foreground task that uses it.
unsigned int myglobal;

void fun ( void )
{
    unsigned int myg;

    myg = myglobal;   /* take a snapshot of the shared global */
    ...
}
and then only use myg for the rest of the function. Basically you are taking a snapshot and using the snapshot. You would want to do the same thing if you are reading a register: if you want to do multiple things based on a sample of something, take one sample of it and make decisions on that one sample; otherwise the item can change between samples. If you are using one global to communicate back and forth with the interrupt handler, I would use two variables: one foreground-to-interrupt, the other interrupt-to-foreground. Yes, there are times where you need to carefully manage a shared resource like that; normally it has to do with needing to do more than one thing. For example, if you had several items that all need to change as a group before the handler can see them change, then you need to disable the interrupt handler until all the items have changed. Here again there is nothing special about embedded microcontrollers; this is all basic stuff you would see on a desktop system with a full-blown operating system.
Keil knows what they are doing; if they support C++ then at a system level they have this worked out. I don't use Keil; I use gcc and llvm for microcontrollers like this one.
Edit:
Here is an example of what I am talking about
https://github.com/dwelch67/stm32vld/tree/master/stm32f4d/blinker05
stm32 using timer-based interrupts: the interrupt handler modifies a variable shared with the foreground task. The foreground task takes a single snapshot of the shared variable (per loop) and, if need be, uses the snapshot more than once in the loop, rather than the shared variable, which can change. This is C, not C++, I understand that, and I am using gcc and llvm, not Keil. (Note: llvm has a known problem optimizing tight while loops, a very old bug; I don't know why they have no interest in fixing it. llvm works for this example.)
Question 1: Local variables
The sample code provided by ST is not particularly efficient or elegant. It gets the job done, but sometimes there are no good reasons for the things they do.
In general, you always want your variables to have the smallest scope possible. If you only use a variable in one function, define it inside that function. Add the "static" keyword to local variables if and only if you need them to retain their value after the function is done.
In some embedded environments, like the PIC18 architecture with the C18 compiler, local variables are much more expensive (more program space, slower execution time) than global. On the Cortex M3, that is not true, so you should feel free to use local variables. Check the assembly listing and see for yourself.
Question 2: Sharing variables between interrupts and the main loop
People have written entire chapters explaining the answers to this group of questions. Whenever you share a variable between the main loop and an interrupt, you should definitely use the volatile keyword on it. Variables of 32 or fewer bits can be accessed atomically (unless they are misaligned).
If you need to access a larger variable, or two variables at the same time from the main loop, then you will have to disable the clock interrupt while you are accessing the variables. If your interrupt does not require precise timing, this will not be a problem. When you re-enable the interrupt, it will automatically fire if it needs to.
Question 3: main function in C++
I'm not sure. You can use arm-none-eabi-nm (or whatever nm is called in your toolchain) on your object file to see what symbol name the C++ compiler assigns to main(). I would bet that C++ compilers refrain from mangling the main function for this exact reason, but I'm not sure.
STM's sample code is not an exemplar of good coding practice, it is merely intended to exemplify use of their standard peripheral library (assuming those are the examples you are talking about). In some cases it may be that variables are declared external to main() because they are accessed from an interrupt context (shared memory). There is also perhaps a possibility that it was done that way merely to allow the variables to be watched in the debugger from any context; but that is not a reason to copy the technique. My opinion of STM's example code is that it is generally pretty poor even as example code, let alone from a software engineering point of view.
In this case your clock-interrupt variable is atomic so long as it is 32 bits or less and you are not using read-modify-write semantics with multiple writers. You can safely have one writer and multiple readers regardless. This is true for this particular platform, but not necessarily universally; the answer may be different for 8- or 16-bit systems, or for multi-core systems, for example. The variable should be declared volatile in any case.
I am using C++ on STM32 with Keil, and there is no problem. I am not sure why you think that the C++ entry points are different, they are not here (Keil ARM-MDK v4.22a). The start-up code calls SystemInit() which initialises the PLL and memory timing for example, then calls __main() which performs global static initialisation then calls C++ constructors for global static objects before calling main(). If in doubt, step through the code in the debugger. It is important to note that __main() is not the main() function you write for your application, it is a wrapper with different behaviour for C and C++, but which ultimately calls your main() function.
I'm using VB.NET 2003 and sometimes this error arises. Does anyone know how this error arises and how to fix it?
Error: The requested clipboard operation failed
I googled this question to see what I'd see, and a lot of people have asked this question, and none of them have gotten a solid answer...
So I went to the MSDN documentation and found a note that explains what most people who have asked this question describe... The symptom usually appears when the user switches to another application while the code is running. The note is quoted below, with the link to the documentation following:
All Windows-based applications share the system Clipboard, so the contents are subject to change when you switch to another application.

An object must be serializable for it to be put on the Clipboard. If you pass a non-serializable object to a Clipboard method, the method will fail without throwing an exception. See System.Runtime.Serialization for more information on serialization. If your target application requires a very specific data format, the headers added to the data in the serialization process may prevent the application from recognizing your data. To preserve your data format, add your data as a Byte array to a MemoryStream and pass the MemoryStream to the SetData method.

The Clipboard class can only be used in threads set to single thread apartment (STA) mode. To use this class, ensure that your Main method is marked with the STAThreadAttribute attribute.

Special considerations may be necessary when using the metafile format with the Clipboard. Due to a limitation in the current implementation of the DataObject class, the metafile format used by the .NET Framework may not be recognized by applications that use an older metafile format. In this case, you must interoperate with the Win32 Clipboard application programming interfaces (APIs). For more information, see article 323530, "Metafiles on Clipboard Are Not Visible to All Applications," in the Microsoft Knowledge Base at http://support.microsoft.com.
http://msdn.microsoft.com/en-us/library/system.windows.forms.clipboard.aspx
Funnily enough, this makes sense of a strange behavior I noticed in one of my own apps. I have an app that writes to Excel spreadsheets (actually, to hundreds of them, modifying hundreds of cells each). I don't use the clipboard at all, just the Interop API for Excel, yet when it's running, my clipboard clears every time a new spreadsheet is created. In my case, Excel is messing with the clipboard, even though there is no discernible reason for it to do so. I'd chalk it up to one of those mysterious Windows phenomena that we mortals will never understand.
At any rate, thanks to your question, I think I understand my issue, so +1 to you for helping me out.
I have that error while trying to:
Clipboard.Clear();
...
Clipboard.SetText(...);
To solve it, I replaced Clipboard.Clear() with P/Invoking some methods from user32.dll:
[DllImport("user32.dll")]
static extern IntPtr GetOpenClipboardWindow();
[DllImport("user32.dll")]
private static extern bool OpenClipboard(IntPtr hWndNewOwner);
[DllImport("user32.dll")]
static extern bool EmptyClipboard();
[DllImport("user32.dll", SetLastError=true)]
static extern bool CloseClipboard();
...
IntPtr handleWnd = GetOpenClipboardWindow();
OpenClipboard(handleWnd);
EmptyClipboard();
CloseClipboard();
...
Clipboard.SetText(...);
I use C# here, but a VB version could easily be created from it.
Is there a chance that UltraVNC is running? I have issues when that application is running in the background on the client PC side. When I close VNC, I can copy to the clipboard successfully. This is not really a satisfying solution, but at least I know the source of the problem in my case.