(IAsyncOperation for non-WinRT types) Run a coroutine task on the Windows thread pool - c++-winrt

I'm trying to provide coroutine support for non-WinRT types that will execute asynchronously on the Windows thread pool.
cppcoro and libunifex both provide an equivalent coroutine task type, task<T>. I imagine it would be a good idea to run these tasks on the Windows thread pool, but maybe I'm just being silly and should use one of the thread pools provided by these libraries.
On the other hand, I was hoping to see how cppwinrt handles this, but I can't penetrate the definition of IAsyncOperation:
template <typename TResult>
struct __declspec(empty_bases) IAsyncOperation :
winrt::Windows::Foundation::IInspectable,
impl::consume_t<winrt::Windows::Foundation::IAsyncOperation<TResult>>,
impl::require<winrt::Windows::Foundation::IAsyncOperation<TResult>, winrt::Windows::Foundation::IAsyncInfo>
{
static_assert(impl::has_category_v<TResult>, "TResult must be WinRT type.");
IAsyncOperation(std::nullptr_t = nullptr) noexcept {}
IAsyncOperation(void* ptr, take_ownership_from_abi_t) noexcept : winrt::Windows::Foundation::IInspectable(ptr, take_ownership_from_abi) {}
};
Kenny Kerr did a talk at cppcon 2016 on this so maybe ill try watching that and then try greping the source code/generated headers.

The hardest part of learning to implement coroutines is understanding the difference between awaitable types and coroutine promise types.
concept awaitable:
    await_ready() -> bool
    await_suspend(coroutine_handle<> handle) -> void
    await_resume() -> T (the result of the co_await expression)
concept promise_type:
    get_return_object() -> coroutine_return_type<T>
    return_value(T const& v) -> void
    unhandled_exception() -> void
    initial_suspend() -> awaitable
    final_suspend() noexcept -> awaitable
For now, I only need to poll whether the object returned by my coroutine is ready (rather than co_await it), but I do need to be able to await winrt::resume_on_signal inside the coroutine. cppwinrt does the work of implementing the awaitable type returned by winrt::resume_on_signal; I need to do the work of implementing the promise type for the coroutine in which I call winrt::resume_on_signal.
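To make the awaitable half concrete, here is a minimal sketch (my own, not cppwinrt's implementation) of an awaitable that resumes the coroutine on the Windows thread pool via TrySubmitThreadpoolCallback, roughly what winrt::resume_background does; throwing std::bad_alloc on submission failure is my assumption.
#include <windows.h>
#include <new>
#include <experimental/coroutine>
// Awaitable that suspends the coroutine and resumes it on a thread-pool thread.
struct resume_on_thread_pool
{
    bool await_ready() const noexcept { return false; } // always suspend so we hop threads
    void await_suspend(std::experimental::coroutine_handle<> handle)
    {
        // The coroutine handle travels as the callback context;
        // the thread-pool thread resumes the coroutine.
        auto callback = [](PTP_CALLBACK_INSTANCE, void* context) noexcept
        {
            std::experimental::coroutine_handle<>::from_address(context).resume();
        };
        if (!TrySubmitThreadpoolCallback(callback, handle.address(), nullptr))
        {
            throw std::bad_alloc{};
        }
    }
    void await_resume() const noexcept {} // the co_await expression produces no value
};
Inside a coroutine, co_await resume_on_thread_pool{}; would then continue execution on a thread-pool thread.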
The simplest possible type you could return from a coroutine might be std::shared_ptr<std::optional<T>>, and specializing coroutine_traits allows you to do this:
#include <cassert>
#include <memory>
#include <optional>
#include <experimental/coroutine>

namespace std::experimental
{
    template <typename T, typename... Args>
    struct coroutine_traits<std::shared_ptr<std::optional<T>>, Args...>
    {
        struct promise_type
        {
            using result_type = std::shared_ptr<std::optional<T>>;
            result_type result = std::make_shared<std::optional<T>>(std::nullopt);
            result_type get_return_object() { return result; }
            void return_value(T const& v) { *result = std::optional<T>(v); }
            void unhandled_exception() { assert(0); }
            std::experimental::suspend_never initial_suspend() { return {}; }
            std::experimental::suspend_never final_suspend() noexcept { return {}; }
        };
    };
}
which you can use as
std::shared_ptr<std::optional<int>> my_coroutine2(HANDLE h)
{
    co_await winrt::resume_on_signal(h);
    co_return int(1);
}
...
auto shared_optional = my_coroutine2(h);
if (*shared_optional)
{
    auto result = **shared_optional;
}
However, there are caveats. You will create a race condition if you also make this type awaitable (see Raymond Chen's Old New Thing post on the subject), although I don't believe you even could make it awaitable, because you would need to be able to call a continuation when the value is set. And it doesn't handle exceptions.
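For the exception caveat, one hedged variation (my own sketch, not something cppwinrt provides) is to widen the shared state so the promise can park a std::exception_ptr instead of asserting:
#include <exception>
#include <memory>
#include <optional>
#include <experimental/coroutine>
// Hypothetical shared state: either a value or a captured exception.
template <typename T>
struct shared_result
{
    std::optional<T> value;
    std::exception_ptr error;
};
namespace std::experimental
{
    template <typename T, typename... Args>
    struct coroutine_traits<std::shared_ptr<shared_result<T>>, Args...>
    {
        struct promise_type
        {
            std::shared_ptr<shared_result<T>> result = std::make_shared<shared_result<T>>();
            auto get_return_object() { return result; }
            void return_value(T const& v) { result->value = v; }
            void unhandled_exception() { result->error = std::current_exception(); } // capture instead of assert
            suspend_never initial_suspend() { return {}; }
            suspend_never final_suspend() noexcept { return {}; }
        };
    };
}
The caller then checks error and calls std::rethrow_exception before reading value; the race-condition caveat above still applies.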

Edit: In hindsight, I could have just used CreateThreadpoolWait and never bothered with coroutines.
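For reference, a coroutine-free sketch of waiting on a handle with CreateThreadpoolWait might look like the following; the event and the int result are placeholders I made up, and error handling is omitted.
#include <windows.h>
// Runs on a thread-pool thread once the handle passed to SetThreadpoolWait is signaled.
void CALLBACK on_signaled(PTP_CALLBACK_INSTANCE, void* context, PTP_WAIT, TP_WAIT_RESULT)
{
    *static_cast<int*>(context) = 1;
}
int main()
{
    HANDLE event = CreateEventW(nullptr, TRUE, FALSE, nullptr);
    int result = 0;
    PTP_WAIT wait = CreateThreadpoolWait(on_signaled, &result, nullptr);
    SetThreadpoolWait(wait, event, nullptr);     // arm the wait on the handle
    SetEvent(event);                             // signal -> callback fires on the thread pool
    WaitForThreadpoolWaitCallbacks(wait, FALSE); // drain outstanding callbacks
    CloseThreadpoolWait(wait);
    CloseHandle(event);
    return result;
}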
The performance of concurrency::task is very good, so that may have been a better solution to my problem, but I'm familiar with cppwinrt coroutines, and they provide the very convenient winrt::resume_on_signal, which I will use.
If you want to learn how to implement coroutine promise types I recommend watching James McNellis' talk.
For how cppwinrt implements its asynchrony with coroutines, watch https://www.youtube.com/watch?v=v0SjumbIips (they do it by specializing coroutine_traits):
template <typename... Args>
struct coroutine_traits<winrt::Windows::Foundation::IAsyncAction, Args...> {...}
But I just followed Raymond Chen's advice and wrote a wrapper around IInspectable
template <typename T>
struct IInspectableWrapper : winrt::implements<IInspectableWrapper<T>, winrt::Windows::Foundation::IInspectable>
{
    IInspectableWrapper(T state) : state_(state) {}
    static winrt::Windows::Foundation::IInspectable box(T state)
    {
        return winrt::make<IInspectableWrapper>(state);
    }
    static T unbox(winrt::Windows::Foundation::IInspectable const& inspectable)
    {
        return winrt::get_self<IInspectableWrapper>(inspectable)->state_;
    }
private:
    T state_;
};
which can be used like this:
winrt::Windows::Foundation::IAsyncOperation<IInspectable> my_coroutine()
{
co_return IInspectableWrapper<int>::box(1);
}
...
auto boxed_result = co_await my_coroutine();
auto result = IInspectableWrapper<int>::unbox(boxed_result);
Although the unboxing is unsafe: winrt::get_self does not check that the IInspectable actually holds an IInspectableWrapper<int>.

Also, you could just pass an out parameter to your coroutine and return an IAsyncAction:
winrt::Windows::Foundation::IAsyncAction my_coroutine3(std::weak_ptr<int> weak_i)
{
    if (auto shared_i = weak_i.lock())
        *shared_i = 1;
    co_return;
}
...
auto i = std::make_shared<int>();
auto action = my_coroutine3(i);
auto future = std::tuple{std::move(i), std::move(action)};
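A hedged sketch of how the caller might poll that pair (continuing the fragment above; the names are mine):
// `future` is the std::tuple built above: the shared_ptr<int> plus the IAsyncAction.
auto& [value, pending] = future;
if (pending.Status() == winrt::Windows::Foundation::AsyncStatus::Completed)
{
    auto result = *value;
}
Once Status() reports Completed, the coroutine has finished writing through the pointer, so reading *value is safe.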

Related

Axon Framework: how to use MessageDispatchInterceptor with a reactive repository

I have read the set-based consistency validation blog post and I want to validate through a dispatch interceptor. I followed the example, but I use a reactive repository and it doesn't really work for me. I have tried both blocking and not blocking: with block() it throws an error, and without block() it doesn't execute anything. Here is my code.
class SubnetCommandInterceptor : MessageDispatchInterceptor<CommandMessage<*>> {

    @Autowired
    private lateinit var privateNetworkRepository: PrivateNetworkRepository

    override fun handle(messages: List<CommandMessage<*>?>): BiFunction<Int, CommandMessage<*>, CommandMessage<*>> {
        return BiFunction<Int, CommandMessage<*>, CommandMessage<*>> { index: Int?, command: CommandMessage<*> ->
            if (CreateSubnetCommand::class.simpleName == (command.payloadType.simpleName)) {
                val interceptCommand = command.payload as CreateSubnetCommand
                privateNetworkRepository
                    .findById(interceptCommand.privateNetworkId)
                    // ..some validation logic here, e.g.
                    // .filter { network -> network.isSubnetOverlap() }
                    .switchIfEmpty(Mono.error(IllegalArgumentException("Requested subnet is overlap with the previous subnet.")))
                    // .block() also doesn't work here; it throws:
                    // block()/blockFirst()/blockLast() are blocking, which is not supported in thread reactor-
            }
            command
        }
    }
}
Subscribing to a reactive repository inside a message dispatch interceptor is not really recommended and might lead to weird behavior, as the underlying ThreadLocal (used by Axon) is not adapted to be used in reactive programming.
Instead, check out Axon's Reactive Extension and its reactive interceptors section.
For example, you might do:
reactiveCommandGateway.registerDispatchInterceptor(
        cmdMono -> cmdMono
                .flatMap(cmd -> privateNetworkRepository.findById(cmd.privateNetworkId))
                .switchIfEmpty(Mono.error(
                        new IllegalArgumentException("Requested subnet is overlap with the previous subnet.")))
                .then(cmdMono));

(C++/CLI) How to get callbacks from Native Code to Managed Code in C++ CLI?

RANT-BEGIN
Before jumping right onto the already-answered bandwagon, please read this paper about outdated answers on SE: https://ieeexplore.ieee.org/document/8669958
Things change over time, and I am afraid computer science is one of the fields, if not the field, where APIs and interfaces change radically and very fast. Needless to say, a solution that might have worked last month might not work after the latest feature added to a platform/framework. I humbly request that you not mark this question as answered with a decade-old post written when many mainstream things did not even exist. If you don't know the latest solution, don't bother with it and leave the question for someone else who might.
For a community representative of computer science, where innovation is an everyday thing, it is very toxic, newcomer-unfriendly, and conservative.
END-RANT
This question has already been answered by me and will be accepted tomorrow (SE policy). Thank you for your interest.
Many times you have function pointers in an unmanaged context which are called by some kind of event. We will see how this can be achieved with top-level functions and also with member functions of a managed class.
Again, please don't mark it as answered by linking to decade-old posts.
PS:
So many edits due to unstable internet in third world country, yeah bite me!
unmanaged.cpp
#pragma unmanaged
// Declare an unmanaged function type that takes one int argument and calls
// back our function after incrementing it by 1.
// Note the use of __stdcall for compatibility with managed code;
// if your unmanaged callback uses any other calling convention you can use
// UnmanagedFunctionPointerAttribute (check MSDN for more info) on your delegate.
typedef int(__stdcall* ANSWERCB)(int); // Signature of the native callback
int TakesCallback(ANSWERCB fp, int a) {
    if (fp) {
        return fp(a + 1); // Invoke the native callback
    }
    // This code will be executed when called without fp
    return 0;
}
#pragma managed
managed.cpp
using namespace System;
using namespace System::Runtime::InteropServices;
namespace Callbacks {
    // The following delegate is for the unmanaged code and must match ANSWERCB's signature
    public delegate int MyNativeDelegate(int i);
    // This delegate is for managed/derived code and ideally should have only managed parameters
    public delegate void MyManagedDelegate(int i);

    public ref class TestCallback { // Our demo managed class
    private:
        GCHandle gch; // keep a reference so it can be freed once we are done with it
        int NativeCallbackListener(int i); // unmanaged code will call this function
    public:
        void TriggerCallback(int i); // Here for demo purposes; usually unmanaged code calls back automatically
        event MyManagedDelegate^ SomethingHappened; // plain old event
        ~TestCallback(); // free gch in the destructor, as it's managed
    };
};
int Callbacks::TestCallback::NativeCallbackListener(int i) {
    // Callback from native code.
    // If you need to transform your arguments, do it here, e.g. turning a void* into some kind of native structure,
    // and then raise SomethingHappened with the managed class/struct.
    SomethingHappened(i); // similar to SomethingHappened.Invoke() in C#
    return i; // the return value travels back through ANSWERCB
}
void Callbacks::TestCallback::TriggerCallback(int i)
{
    MyNativeDelegate^ fp = gcnew MyNativeDelegate(this, &TestCallback::NativeCallbackListener);
    // Use MyNativeDelegate^ fp = gcnew MyNativeDelegate(&NativeCallbackListener); if your native callback is not a member function.
    gch = GCHandle::Alloc(fp);
    IntPtr ip = Marshal::GetFunctionPointerForDelegate(fp);
    ANSWERCB cb = static_cast<ANSWERCB>(ip.ToPointer()); // (ANSWERCB)ip.ToPointer(); works as well
    // Simulating a native call; it should call back to our function pointer NativeCallbackListener with i + 1.
    // Ideally the native code keeps the function pointer and calls back without the pointer being provided every time,
    // most likely with a dedicated function for that.
    TakesCallback(cb, i);
}
Callbacks::TestCallback::~TestCallback() {
    gch.Free(); // Free the GCHandle so the GC can collect the delegate
}
implementation.cpp
using namespace System;
void OnSomethingHappened(int i);
int main(array<System::String^>^ args)
{
auto cb = gcnew Callbacks::TestCallback();
cb->SomethingHappened += gcnew Callbacks::MyManagedDelegate(&OnSomethingHappened);
cb->TriggerCallback(1);
return 0;
}
void OnSomethingHappened(int i)
{
Console::WriteLine("Got call back with " + i);
}

What is the Rust equivalent of a C local static variable? [duplicate]

What is the best way to create and use a struct with only one instantiation in the system? Yes, this is necessary, it is the OpenGL subsystem, and making multiple copies of this and passing it around everywhere would add confusion, rather than relieve it.
The singleton needs to be as efficient as possible. It doesn't seem possible to store an arbitrary object on the static area, as it contains a Vec with a destructor. The second option is to store an (unsafe) pointer on the static area, pointing to a heap allocated singleton. What is the most convenient and safest way to do this, while keeping syntax terse?
Non-answer answer
Avoid global state in general. Instead, construct the object somewhere early (perhaps in main), then pass mutable references to that object into the places that need it. This will usually make your code easier to reason about and doesn't require as much bending over backwards.
Look hard at yourself in the mirror before deciding that you want global mutable variables. There are rare cases where it's useful, so that's why it's worth knowing how to do.
Still want to make one...?
Tips
In the three following solutions:
If you remove the Mutex, then you have a global singleton without any mutability.
You can also use an RwLock instead of a Mutex to allow multiple concurrent readers.
Using lazy-static
The lazy-static crate can take away some of the drudgery of manually creating a singleton. Here is a global mutable vector:
use lazy_static::lazy_static; // 1.4.0
use std::sync::Mutex;
lazy_static! {
static ref ARRAY: Mutex<Vec<u8>> = Mutex::new(vec![]);
}
fn do_a_call() {
ARRAY.lock().unwrap().push(1);
}
fn main() {
do_a_call();
do_a_call();
do_a_call();
println!("called {}", ARRAY.lock().unwrap().len());
}
Using once_cell
The once_cell crate can take away some of the drudgery of manually creating a singleton. Here is a global mutable vector:
use once_cell::sync::Lazy; // 1.3.1
use std::sync::Mutex;
static ARRAY: Lazy<Mutex<Vec<u8>>> = Lazy::new(|| Mutex::new(vec![]));
fn do_a_call() {
ARRAY.lock().unwrap().push(1);
}
fn main() {
do_a_call();
do_a_call();
do_a_call();
println!("called {}", ARRAY.lock().unwrap().len());
}
Using std::sync::LazyLock
The standard library is in the process of adding once_cell's functionality, currently called LazyLock:
#![feature(once_cell)] // 1.67.0-nightly
use std::sync::{LazyLock, Mutex};
static ARRAY: LazyLock<Mutex<Vec<u8>>> = LazyLock::new(|| Mutex::new(vec![]));
fn do_a_call() {
ARRAY.lock().unwrap().push(1);
}
fn main() {
do_a_call();
do_a_call();
do_a_call();
println!("called {}", ARRAY.lock().unwrap().len());
}
A special case: atomics
If you only need to track an integer value, you can directly use an atomic:
use std::sync::atomic::{AtomicUsize, Ordering};
static CALL_COUNT: AtomicUsize = AtomicUsize::new(0);
fn do_a_call() {
CALL_COUNT.fetch_add(1, Ordering::SeqCst);
}
fn main() {
do_a_call();
do_a_call();
do_a_call();
println!("called {}", CALL_COUNT.load(Ordering::SeqCst));
}
Manual, dependency-free implementation
There are several existing implementations of statics, such as the Rust 1.0 implementation of stdin. This is the same idea adapted to modern Rust, such as the use of MaybeUninit to avoid allocations and unnecessary indirection. You should also look at the modern implementation of io::Lazy. I've commented inline about what each line does.
use std::sync::{Mutex, Once};
use std::time::Duration;
use std::{mem::MaybeUninit, thread};
struct SingletonReader {
// Since we will be used in many threads, we need to protect
// concurrent access
inner: Mutex<u8>,
}
fn singleton() -> &'static SingletonReader {
// Create an uninitialized static
static mut SINGLETON: MaybeUninit<SingletonReader> = MaybeUninit::uninit();
static ONCE: Once = Once::new();
unsafe {
ONCE.call_once(|| {
// Make it
let singleton = SingletonReader {
inner: Mutex::new(0),
};
// Store it to the static var, i.e. initialize it
SINGLETON.write(singleton);
});
// Now we give out a shared reference to the data, which is safe to use
// concurrently.
SINGLETON.assume_init_ref()
}
}
fn main() {
// Let's use the singleton in a few threads
let threads: Vec<_> = (0..10)
.map(|i| {
thread::spawn(move || {
thread::sleep(Duration::from_millis(i * 10));
let s = singleton();
let mut data = s.inner.lock().unwrap();
*data = i as u8;
})
})
.collect();
// And let's check the singleton every so often
for _ in 0u8..20 {
thread::sleep(Duration::from_millis(5));
let s = singleton();
let data = s.inner.lock().unwrap();
println!("It is: {}", *data);
}
for thread in threads.into_iter() {
thread.join().unwrap();
}
}
This prints out:
It is: 0
It is: 1
It is: 1
It is: 2
It is: 2
It is: 3
It is: 3
It is: 4
It is: 4
It is: 5
It is: 5
It is: 6
It is: 6
It is: 7
It is: 7
It is: 8
It is: 8
It is: 9
It is: 9
It is: 9
This code compiles with Rust 1.55.0.
All of this work is what lazy-static or once_cell do for you.
The meaning of "global"
Please note that you can still use normal Rust scoping and module-level privacy to control access to a static or lazy_static variable. This means that you can declare it in a module or even inside of a function and it won't be accessible outside of that module / function. This is good for controlling access:
use lazy_static::lazy_static; // 1.2.0
fn only_here() {
lazy_static! {
static ref NAME: String = String::from("hello, world!");
}
println!("{}", &*NAME);
}
fn not_here() {
println!("{}", &*NAME);
}
error[E0425]: cannot find value `NAME` in this scope
--> src/lib.rs:12:22
|
12 | println!("{}", &*NAME);
| ^^^^ not found in this scope
However, the variable is still global in that there's one instance of it that exists across the entire program.
Starting with Rust 1.63, it can be easier to work with global mutable singletons, although it's still preferable to avoid global variables in most cases.
Now that Mutex::new is const, you can use global static Mutex locks without needing lazy initialization:
use std::sync::Mutex;
static GLOBAL_DATA: Mutex<Vec<i32>> = Mutex::new(Vec::new());
fn main() {
GLOBAL_DATA.lock().unwrap().push(42);
println!("{:?}", GLOBAL_DATA.lock().unwrap());
}
Note that this also depends on the fact that Vec::new is const. If you need to use non-const functions to set up your singleton, you could wrap your data in an Option and initially set it to None. This lets you use data structures like HashSet, which currently cannot be constructed in a const context:
use std::sync::Mutex;
use std::collections::HashSet;
static GLOBAL_DATA: Mutex<Option<HashSet<i32>>> = Mutex::new(None);
fn main() {
*GLOBAL_DATA.lock().unwrap() = Some(HashSet::from([42]));
println!("V2: {:?}", GLOBAL_DATA.lock().unwrap());
}
Alternatively, you could use an RwLock, instead of a Mutex, since RwLock::new is also const as of Rust 1.63. This would make it possible to read the data from multiple threads simultaneously.
If you need to initialize with non-const functions and you'd prefer not to use an Option, you could use a crate like once_cell or lazy-static for lazy initialization as explained in Shepmaster's answer.
From What Not To Do In Rust
To recap: instead of using interior mutability where an object changes
its internal state, consider using a pattern where you promote new
state to be current and current consumers of the old state will
continue to hold on to it by putting an Arc into an RwLock.
use std::sync::{Arc, RwLock};
#[derive(Default)]
struct Config {
pub debug_mode: bool,
}
impl Config {
pub fn current() -> Arc<Config> {
CURRENT_CONFIG.with(|c| c.read().unwrap().clone())
}
pub fn make_current(self) {
CURRENT_CONFIG.with(|c| *c.write().unwrap() = Arc::new(self))
}
}
thread_local! {
static CURRENT_CONFIG: RwLock<Arc<Config>> = RwLock::new(Default::default());
}
fn main() {
Config { debug_mode: true }.make_current();
if Config::current().debug_mode {
// do something
}
}
Use a SpinLock for global access (note that SpinLock is not in the standard library; it would come from a third-party crate or your own implementation).
#[derive(Default)]
struct ThreadRegistry {
pub enabled_for_new_threads: bool,
threads: Option<HashMap<u32, *const Tls>>,
}
impl ThreadRegistry {
fn threads(&mut self) -> &mut HashMap<u32, *const Tls> {
self.threads.get_or_insert_with(HashMap::new)
}
}
static THREAD_REGISTRY: SpinLock<ThreadRegistry> = SpinLock::new(Default::default());
fn func_1() {
let thread_registry = THREAD_REGISTRY.lock(); // Immutable access
if thread_registry.enabled_for_new_threads {
}
}
fn func_2() {
let mut thread_registry = THREAD_REGISTRY.lock(); // Mutable access
thread_registry.threads().insert(
// ...
);
}
If you want mutable state (not a singleton), see What Not to Do in Rust for more details.
Hope it's helpful.
If you are on nightly, you can use LazyLock.
It more or less does what the crates once_cell and lazy_static do. Those two crates are very common, so there's a good chance they might already be in your Cargo.lock dependency tree. But if you prefer to be a bit more "adventurous" and go with LazyLock, be prepared that it (like everything in nightly) might be subject to change before it gets to stable.
(Note: up until recently, std::sync::LazyLock was named std::lazy::SyncLazy but was recently renamed.)
A bit late to the party, but here's how I worked around this issue (rust 1.66-nightly):
#![feature(const_size_of_val)]
#![feature(const_ptr_write)]

use std::mem::{self, MaybeUninit};
use std::ptr::write_bytes;

static mut GLOBAL_LAZY_MUT: StructThatIsNotSyncNorSend = unsafe {
    // Copied from MaybeUninit::zeroed() with minor modifications, see below
    let mut u = MaybeUninit::uninit();
    let bytes = mem::size_of_val(&u);
    // Trick the compiler check that verifies pointers and references are not null.
    write_bytes(u.as_ptr() as *const u8 as *mut u8, 0xA5, bytes);
    u.assume_init()
};
(...)
fn main() {
unsafe {
let mut v = StructThatIsNotSyncNorSend::new();
mem::swap(&mut GLOBAL_LAZY_MUT, &mut v);
mem::forget(v);
}
}
Beware that this code is unbelievably unsafe, and can easily end up being UB if not handled correctly.
You now have a !Send !Sync value as a global static, without the protection of a Mutex. If you access it from multiple threads, even just for reading, it's UB. If you don't initialize it the way shown, it's UB, because Drop ends up being called on an actually uninitialized value.
You just convinced the Rust compiler that something that is UB is not UB, and that putting a !Sync and !Send value in a global static is fine.
If unsure, don't use this snippet.
My limited solution is to define a struct instead of a global mutable variable. To use that struct, external code needs to call init(), but we disallow calling init() more than once by using an AtomicBool (for multithreaded usage).
use std::sync::atomic::{AtomicBool, Ordering};

static INITIATED: AtomicBool = AtomicBool::new(false);

struct Singleton {
    ...
}

impl Singleton {
    pub fn init() -> Self {
        // swap returns the previous value, so only the first caller sees `false`;
        // a separate load followed by a store would leave a window for a second init.
        if INITIATED.swap(true, Ordering::Relaxed) {
            panic!("Cannot initiate more than once")
        } else {
            Singleton {
                ...
            }
        }
    }
}
fn main() {
let singleton = Singleton::init();
// panic here
// let another_one = Singleton::init();
...
}

How to modify variables outside of their scope in kotlin?

I understand that in Kotlin there is no such thing as "non-local variables" or "global variables". I am looking for a way to modify variables in another scope in Kotlin, using the function below:
class Listres(){
var listsize = 0
fun gatherlistresult(){
var listallinfo = FirebaseStorage.getInstance()
.getReference()
.child("MainTimeline/")
.listAll()
listallinfo.addOnSuccessListener {
listResult -> listsize += listResult.items.size
}
}
}
The value of listsize is always 0 (logging the result from inside the .addOnSuccessListener scope returns 8), so clearly the listsize variable isn't being modified. I have seen many different posts about this topic on other sites, but none fit my use case.
I simply want to modify listsize inside of the .addOnSuccessListener callback.
This method will always return 0, as the addOnSuccessListener() listener is invoked after the method execution has completed. addOnSuccessListener() is a callback for an asynchronous operation, and you will only get the value if the operation succeeds.
You can get the value by changing the code as below:
class Demo {

    var listsize = 0

    fun registerListResult() {
        val listallinfo = FirebaseStorage.getInstance()
            .getReference()
            .child("MainTimeline/")
            .listAll()
        listallinfo.addOnSuccessListener { listResult ->
            listsize += listResult.items.size
            processResult(listsize)
        }
        listallinfo.addOnFailureListener {
            // Uh-oh, an error occurred!
        }
    }

    fun processResult(listsize: Int) {
        print(listsize) // you will get the 8 here as you said
    }
}
What you're looking for is a way to bridge some asynchronous processing into a synchronous context. If possible it's usually better (in my opinion) to stick to one model (sync or async) throughout your code base.
That being said, sometimes these circumstances are out of our control. One approach I've used in similar situations involves introducing a BlockingQueue as a data pipe to transfer data from the async context to the sync context. In your case, that might look something like this:
class Demo {
var listSize = 0
fun registerListResult() {
val listAll = FirebaseStorage.getInstance()
.getReference()
.child("MainTimeline/")
.listAll()
val dataQueue = ArrayBlockingQueue<Int>(1)
listAll.addOnSuccessListener { dataQueue.put(it.items.size) }
listSize = dataQueue.take()
}
}
The key points are:
there is a blocking variant of the Queue interface that will be used to pipe data from the async context (listener) into the sync context (calling code)
data is put() on the queue within the OnSuccessListener
the calling code invokes the queue's take() method, which will cause that thread to block until a value is available
If that doesn't work for you, hopefully it will at least inspire some new thoughts!

Accessing a C/C++ structure of callbacks through a DLL's exported function using JNA

I have a vendor supplied .DLL and an online API that I am using to interact with a piece of radio hardware; I am using JNA to access the exported functions through Java (because I don't know C/C++). I can call basic methods and use some API structures successfully, but I am having trouble with the callback structure. I've followed the TutorTutor guide here and also tried Mr. Wall's authoritative guide here, but I haven't been able to formulate the Java side syntax for callbacks set in a structure correctly.
I need to use this exported function:
BOOL __stdcall SetCallbacks(INT32 hDevice,
CONST G39DDC_CALLBACKS *Callbacks, DWORD_PTR UserData);
This function references the C/C++ Structure:
typedef struct{
G39DDC_IF_CALLBACK IFCallback;
//more omitted
} G39DDC_CALLBACKS;
...which according to the API has these Members (Note this is not an exported function):
VOID __stdcall IFCallback(CONST SHORT *Buffer, UINT32 NumberOfSamples,
UINT32 CenterFrequency, WORD Amplitude,
UINT32 ADCSampleRate, DWORD_PTR UserData);
//more omitted
I have a G39DDCAPI.java where I have loaded the DLL library and reproduced the API exported functions in Java, with the help of JNA. Simple calls to that work well.
I also have a G39DDC_CALLBACKS.java where I have implemented the above C/C++ structure in a format that works for other API structures. This callback structure is where I am unsure of the syntax:
import java.util.Arrays;
import java.util.List;
import java.nio.ShortBuffer;
import com.sun.jna.Structure;
import com.sun.jna.platform.win32.BaseTSD.DWORD_PTR;
import com.sun.jna.win32.StdCallLibrary.StdCallCallback;
public class G39DDC_CALLBACKS extends Structure {
public G39DDC_IF_CALLBACK IFCallback;
//more omitted
protected List getFieldOrder() {
return Arrays.asList(new String[] {
"IFCallback","DDC1StreamCallback" //more omitted
});
}
public static interface G39DDC_IF_CALLBACK extends StdCallCallback{
public void invoke(ShortBuffer _Buffer,int NumberOfSamples,
int CenterFrequency, short Amplitude,
int ADCSampleRate, DWORD_PTR UserData);
}
}
Edit: I made my arguments more type safe as Technomage suggested. I am still getting a null pointer exception with several attempts to call the callback. Since I'm not sure of my syntax regarding the callback structure above, I can't pinpoint my problem in the main below. Right now the relevant section looks like this:
int NumberOfSamples=65536;//This is usually 65536.
ShortBuffer _Buffer = ShortBuffer.allocate(NumberOfSamples);
int CenterFrequency=10000000;//Specifies center frequency (in Hz) of the useful band
//in received 50 MHz wide snapshot.
short Amplitude=0;//The possible value is 0 to 32767.
int ADCSampleRate=100;//Specifies sample rate of the ADC in Hz.
DWORD_PTR UserData = null;
G39DDC_CALLBACKS callbackStruct= new G39DDC_CALLBACKS();
lib.SetCallbacks(hDevice,callbackStruct,UserData);
//hDevice is a handle for the hardware device used-- works in other uses
//lib is a reference to the library in G39DDCAPI.java-- works in other uses
//The UserData is a big unknown-- I don't know what to do with this variable
//as a DWORD_PTR
callbackStruct.IFCallback.invoke(_Buffer, NumberOfSamples, CenterFrequency,
Amplitude, ADCSampleRate, UserData);
EDIT NO 2:
I have one callback working somewhat, but I don't have control over the buffers. More frustratingly, a single call to invoke the method will result in several runs of the custom callback, usually with multiple output files (results vary drastically from run to run). I don't know if it is because I am not allocating memory correctly on the Java side, because I cannot free the memory on the C/C++ side, or because I have no cue on which to tell Java to access the buffer, etc. Relevant code looks like:
//before this, main method sets library, starts DDCs, initializes some variables...
//API call to start IF
System.out.print("Starting IF... " + lib.StartIF(hDevice, Period) + "\n");
G39DDC_CALLBACKS callbackStructure = new G39DDC_CALLBACKS();
callbackStructure.IFCallback = new G39DDC_IF_CALLBACK(){
@Override
public void invoke(Pointer _Buffer, int NumberOfSamples, int CenterFrequency,
short Amplitude, int ADCSampleRate, DWORD_PTR UserData ) {
//notification
System.out.println("Invoked IFCallback!!");
try {
//ready file and writers
File filePath = new File("/users/user/G39DDC_Scans/");
if (!filePath.exists()){
System.out.println("Making new directory...");
filePath.mkdir();
}
String filename="Scan_"+System.currentTimeMillis();
File fille= new File("/users/user/G39DDC_Scans/"+filename+".txt");
if (!fille.exists()) {
System.out.println("Making new file...");
fille.createNewFile();
}
FileWriter fw = new FileWriter(fille.getAbsoluteFile());
//callback body
short[] deBuff=new short[NumberOfSamples];
int offset=0;
int arraySize=NumberOfSamples;
deBuff=_Buffer.getShortArray(offset,arraySize);
for (int i=0; i<NumberOfSamples; i++){
String str=deBuff[i]+",";
fw.write(str);
}
fw.close();
} catch (IOException e1) {
System.out.println("IOException: "+e1);
}
}
};
lib.SetCallbacks(hDevice, callbackStructure,UserData);
System.out.println("Main, before callback invocation");
callbackStructure.IFCallback.invoke(s_Pointer, NumberOfSamples, CenterFrequency, Amplitude, ADCSampleRate, UserData);
System.out.println("Main, after callback invocation");
//suddenly having trouble stopping DDCs or powering off device; assume it has to do with dll using the functions above
//System.out.println("StopIF: " + lib.StopIF(hDevice));//API function returns boolean value
//System.out.println("StopDDC2: " + lib.StopDDC2( hDevice, Channel));
//System.out.println("StopDDC1: " + lib.StopDDC1( hDevice, Channel ));
//System.out.println("test_finishDevice: " + test_finishDevice( hDevice, lib));
System.out.println("Program Exit");
//END MAIN METHOD
You need to extend StdCallCallback, for one, otherwise you'll likely crash when the native code tries to call the Java code.
Any place you see a Windows type with _PTR, you should use a PointerType - the platform package with JNA includes definitions for DWORD_PTR and friends.
Finally, you can't have a primitive array argument in your G39DDC_IF_CALLBACK. You'll need to use Pointer or an NIO buffer; Pointer.getShortArray() may then be used to extract the short[] by providing the desired length of the array.
EDIT
Yes, you need to initialize your callback field in the callbacks structure before passing it into your native function, otherwise you're just passing a NULL pointer, which will cause complaints on the Java or native side or both.
This is what it takes to create a callback, using an anonymous instance of the declared callback function interface:
myStruct.callbackField = new MyCallback() {
public void invoke(int arg) {
// do your stuff here
}
};