Can the Flutter native side use an EventChannel to transfer map data? - flutter-platform-channel

I am running the Flutter platform channel "EventChannel" on the Windows platform.
I know this is not mentioned in the official platform channel documentation,
but I found some examples that work on Windows and I tested them.
Now I can transfer single values with the code below:
void initEventChannel(flutter::FlutterEngine* flutter_instance) {
  const static std::string event_channel_name("getFromWinsBrowsingDevice");
  const flutter::StandardMethodCodec& codec = flutter::StandardMethodCodec::GetInstance();
  flutter::EventChannel<flutter::EncodableValue> event_channel_name_(
      flutter_instance->messenger(), event_channel_name, &codec);
  event_channel_name_.SetStreamHandler(
      std::make_unique<flutter::StreamHandlerFunctions<flutter::EncodableValue>>(on_listen, on_cancel));
}

std::unique_ptr<flutter::StreamHandlerError<flutter::EncodableValue>> on_listen(
    const flutter::EncodableValue* arguments,
    std::unique_ptr<flutter::EventSink<flutter::EncodableValue>>&& events) {
  // Hand the sink off to a worker thread that pushes events periodically.
  std::thread BrowsingThread(sentBrowsingEvent, std::move(events));
  BrowsingThread.detach();
  return nullptr;
}

void sentBrowsingEvent(std::unique_ptr<flutter::EventSink<flutter::EncodableValue>>&& events) {
  // Browsing_Check_routine() builds BrowsingDeviceMap, the map fed back to the UI.
  while (true) {
    Browsing_Check_routine();
    events->Success(flutter::EncodableValue(BrowsingDeviceMap)); // This fails when sending the whole map
    std::this_thread::sleep_for(std::chrono::seconds(1));
  }
}
I would like some help: how should I fix this error so that the map data is passed to the Flutter side?
events->Success(flutter::EncodableValue(BrowsingDeviceMap)); // This fails when sending the whole map
Thank you!
Edit:
I realized my map is a plain C++ std::map. To pass the map with EncodableValue, the variable must be declared as a flutter::EncodableMap.
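For illustration, a minimal sketch of that declaration (flutter::EncodableMap is a std::map<flutter::EncodableValue, flutter::EncodableValue>, so both keys and values must be wrapped; the device fields here are made up):
flutter::EncodableMap BrowsingDeviceMap;
BrowsingDeviceMap[flutter::EncodableValue("name")] = flutter::EncodableValue("My Device"); // hypothetical field
BrowsingDeviceMap[flutter::EncodableValue("rssi")] = flutter::EncodableValue(-60);         // hypothetical field
events->Success(flutter::EncodableValue(BrowsingDeviceMap)); // the whole map now encodes cleanly
On the Dart side the event then arrives through the EventChannel stream as a Map.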

Related

How do i send an intent via react native to Print Connect zebra app

I am currently trying to communicate with a Zebra printer from a react-native application. On mobile, I am trying to send my ZPL code (instructions telling the printer what to print) from my application to the printer via PrintConnect. Zebra also provides a PDF guide on how to communicate with the app via intents, available here; however, the examples displayed in the guide use a different language.
My question is: how would I go about replicating this (page 96, Passthrough Intent example):
Intent intent = new Intent();
intent.setComponent(new ComponentName("com.zebra.printconnect",
        "com.zebra.printconnect.print.PassthroughService"));
intent.putExtra("com.zebra.printconnect.PrintService.PASSTHROUGH_DATA", passthroughBytes);
intent.putExtra("com.zebra.printconnect.PrintService.RESULT_RECEIVER", buildIPCSafeReceiver(new ResultReceiver(null) {
    @Override
    protected void onReceiveResult(int resultCode, Bundle resultData) {
        if (resultCode == 0) { // Result code 0 indicates success
            // Handle successful print
        } else {
            // Handle unsuccessful print
            // Error message (null on successful print)
            String errorMessage = resultData.getString("com.zebra.printconnect.PrintService.ERROR_MESSAGE");
        }
    }
}));
Into something acceptable to the react-native-send-intent package, such as this:
SendIntentAndroid.openApp("com.mycorp.myapp", {
"com.mycorp.myapp.reason": "just because",
"com.mycorp.myapp.data": "must be a string",
}).then(wasOpened => {});
Thank you for the time you took to read my question.
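One possible direction, sketched under assumptions: react-native-send-intent only passes string extras, while PASSTHROUGH_DATA must be a byte array, so a small custom native module that fires the intent itself may be needed. The module method below is hypothetical; only the component and extra names come from Zebra's guide.
// Hypothetical method inside a custom React Native module (Java);
// not part of react-native-send-intent.
@ReactMethod
public void passthroughPrint(String zpl) {
    Intent intent = new Intent();
    intent.setComponent(new ComponentName("com.zebra.printconnect",
            "com.zebra.printconnect.print.PassthroughService"));
    // PrintConnect expects the ZPL as raw bytes.
    intent.putExtra("com.zebra.printconnect.PrintService.PASSTHROUGH_DATA",
            zpl.getBytes(StandardCharsets.UTF_8));
    getReactApplicationContext().startService(intent);
}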

Asynchronous programming for IotHub Device Registration in Java?

I am currently trying to implement a Java web service (REST API) where an endpoint creates a device in the IoT Hub and updates the device twin.
There are two methods available in the azure-iot SDK. One is
addDevice(deviceId, authenticationType)
and the other is
addDeviceAsync(deviceId, authenticationType)
I just wanted to figure out which one I should use in the web service (as a best practice). I am not very strong in multithreading/concurrency, so I was hoping to draw on people's expertise here. Any suggestion or link related to this is much appreciated.
Thanks.
The async version of addDevice is basically the same: if you use addDeviceAsync, a thread is created to run the addDevice call so you are not blocked on it.
Check line 269 of RegistryManager, which does exactly that: https://github.com/Azure/azure-iot-sdk-java/blob/master/service/iot-service-client/src/main/java/com/microsoft/azure/sdk/iot/service/RegistryManager.java#L269
public CompletableFuture<Device> addDeviceAsync(Device device) throws IOException, IotHubException
{
    if (device == null)
    {
        throw new IllegalArgumentException("device cannot be null");
    }
    final CompletableFuture<Device> future = new CompletableFuture<>();
    executor.submit(() ->
    {
        try
        {
            Device responseDevice = addDevice(device);
            future.complete(responseDevice);
        }
        catch (IOException | IotHubException e)
        {
            future.completeExceptionally(e);
        }
    });
    return future;
}
You can also build your own async wrapper and call addDevice() from there.
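For example, a minimal sketch of such a wrapper using CompletableFuture.supplyAsync (the registryManager field and the executor argument are assumptions for illustration, not SDK names):
public CompletableFuture<Device> addDeviceCustomAsync(Device device, Executor executor) {
    return CompletableFuture.supplyAsync(() -> {
        try {
            return registryManager.addDevice(device); // the blocking SDK call
        } catch (IOException | IotHubException e) {
            throw new CompletionException(e); // surface checked exceptions through the future
        }
    }, executor);
}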

How can I use Kotlin to handle asynchronous speech recognition?

Code A is from the article https://cloud.google.com/speech-to-text/docs/async-recognize
It is written in Java. I don't think the following code is good code; it blocks the app:
while (!response.isDone()) {
System.out.println("Waiting for response...");
Thread.sleep(10000);
}
...
I'm a beginner with Kotlin. How can I use Kotlin to write better code? Maybe using coroutines?
Code A
public static void asyncRecognizeGcs(String gcsUri) throws Exception {
    // Instantiates a client with GOOGLE_APPLICATION_CREDENTIALS
    try (SpeechClient speech = SpeechClient.create()) {
        // Configure remote file request for FLAC
        RecognitionConfig config =
            RecognitionConfig.newBuilder()
                .setEncoding(AudioEncoding.FLAC)
                .setLanguageCode("en-US")
                .setSampleRateHertz(16000)
                .build();
        RecognitionAudio audio = RecognitionAudio.newBuilder().setUri(gcsUri).build();
        // Use non-blocking call for getting file transcription
        OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> response =
            speech.longRunningRecognizeAsync(config, audio);
        while (!response.isDone()) {
            System.out.println("Waiting for response...");
            Thread.sleep(10000);
        }
        List<SpeechRecognitionResult> results = response.get().getResultsList();
        for (SpeechRecognitionResult result : results) {
            // There can be several alternative transcripts for a given chunk of speech. Just use the
            // first (most likely) one here.
            SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
            System.out.printf("Transcription: %s\n", alternative.getTranscript());
        }
    }
}
You will have to provide some context to understand what you are trying to achieve, but it looks like a coroutine is not really necessary here, as longRunningRecognizeAsync is already non-blocking and returns an OperationFuture response object. You just need to decide what to do with that response, or store the future and check it later. There is nothing inherently wrong with while (!response.isDone()) {}; that's how Java futures are supposed to work. Also check OperationFuture: if it's a normal Java Future, it implements a get() method that will let you wait for the result when necessary, without an explicit Thread.sleep().
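If you still want a coroutine-friendly shape, a minimal Kotlin sketch (assuming the same Google Cloud Speech client as Code A and kotlinx.coroutines on the classpath) could wrap the blocking get() on the IO dispatcher:
import com.google.cloud.speech.v1.*
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// get() blocks until the long-running operation completes, so run it on the
// IO dispatcher to keep the caller's thread free.
suspend fun transcribeGcs(gcsUri: String): List<SpeechRecognitionResult> =
    withContext(Dispatchers.IO) {
        SpeechClient.create().use { speech ->
            val config = RecognitionConfig.newBuilder()
                .setEncoding(RecognitionConfig.AudioEncoding.FLAC)
                .setLanguageCode("en-US")
                .setSampleRateHertz(16000)
                .build()
            val audio = RecognitionAudio.newBuilder().setUri(gcsUri).build()
            speech.longRunningRecognizeAsync(config, audio).get().resultsList
        }
    }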

Agora many-to-one live streaming

I have a requirement in which different users will stream video from their cameras to a server, and there will be a dashboard where an admin can view all the streams in real time, something like how surveillance works. I think video broadcasting can help, but the documentation says it enables live streaming one-to-many and many-to-many; there is no mention of the many-to-one case. How can I achieve this?
The use-case you have described would be implemented the same way as a many-to-many broadcast.
For your use-case you would have all of the camera streams join the channel as broadcasters, and the "surveillance" user would join as an audience member. The audience member subscribes to all the remote streams without having to broadcast a stream of their own.
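For illustration, a minimal sketch of the dashboard side with the Agora Android SDK (context, appId, handler, and the token/channel values are placeholders):
RtcEngine engine = RtcEngine.create(context, appId, handler);
engine.setChannelProfile(Constants.CHANNEL_PROFILE_LIVE_BROADCASTING);
engine.setClientRole(Constants.CLIENT_ROLE_AUDIENCE); // subscribe-only: no stream of its own
engine.joinChannel(token, "surveillance-channel", "", 0);
// Each camera client joins the same channel with CLIENT_ROLE_BROADCASTER instead.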
[Update]
With Agora's SDK you can use an external video source; you just have to manage it yourself. If you are using a custom video source, then you don't need to use RTMP.
IVideoFrameConsumer mConsumer;
boolean mHasStarted;

// Create a VideoSource instance.
VideoSource source = new VideoSource() {
    @Override
    public int getBufferType() {
        // Get the current frame type.
        // The SDK uses different methods to process different frame types.
        // If you want to switch to another VideoSource type, create another instance.
        // There are three video frame types: BYTE_BUFFER(1); BYTE_ARRAY(2); TEXTURE(3)
        return BufferType.BYTE_ARRAY;
    }

    @Override
    public boolean onInitialize(IVideoFrameConsumer consumer) {
        // The consumer was created by the SDK.
        // Save it for the lifecycle of the VideoSource.
        mConsumer = consumer;
        return true;
    }

    @Override
    public boolean onStart() {
        mHasStarted = true;
        return true;
    }

    @Override
    public void onStop() {
        mHasStarted = false;
    }

    @Override
    public void onDispose() {
        // Release the consumer.
        mConsumer = null;
    }
};

// Set the VideoSource instance as the input video stream.
rtcEngine.setVideoSource(source);

// After receiving the video frame data, use the consumer to send the data.
// Choose different methods according to the frame type.
// For example, here the frame type is a byte array, i.e. NV21.
if (mHasStarted && mConsumer != null) {
    mConsumer.consumeByteArrayFrame(data, AgoraVideoFrame.FORMAT_NV21, width, height, rotation, timestamp);
}
Full guide: https://docs.agora.io/en/Video/custom_video_android?platform=Android

Metro c++ async programming and UI updating. My technique?

The problem: I'm crashing when I want to render my incoming data, which was retrieved asynchronously.
The app starts and displays some dialog boxes using XAML. Once the user fills in their data and clicks the login button, the XAML class uses an instance of a worker class that does the HTTP stuff for me (asynchronously, using IXMLHTTPRequest2). When the app has successfully logged in to the web server, my .then() block fires and I make a callback to my main XAML class to do some rendering of the assets.
I am always getting crashes in the delegate though (the main XAML class), which leads me to believe that I cannot use this approach (pure virtual class and callbacks) to update my UI. I think I am inadvertently trying to do something illegal from an incorrect thread, as a byproduct of the async calls.
Is there a better or different way that I should be notifying the main XAML class that it is time for it to update its UI? I am coming from an iOS world where I could use NotificationCenter.
Now, I saw that Microsoft has its own Delegate type of thing here: http://msdn.microsoft.com/en-us/library/windows/apps/hh755798.aspx
Do you think that if I used this approach instead of my own callbacks it would no longer crash?
Let me know if you need more clarification.
Here is the gist of the code:
public interface class ISmileServiceEvents
{
public: // required methods
    virtual void UpdateUI(String^ data) abstract;
};

// In main XAML.cpp, which inherits from ISmileServiceEvents
void buttonClick(...)
{
    _myUser->LoginAndGetAssets(txtEmail->Text, txtPass->Password);
}

void UpdateUI(String^ data) // implements ISmileServiceEvents
{
    // This is where I would render my assets if I could.
    // Cannot legally do much here. Always crashes.
    // Follow the rest of the code to get here.
}

// In MyUser.cpp
void LoginAndGetAssets(String^ email, String^ password)
{
    Uri^ uri = ref new Uri(MY_SERVER + "login.json");
    String^ inJSON = "some json input data here"; // serialized email and password with other data

    // Make the HTTP request to log in, then notify XAML that it has data to render.
    _myService->HTTPPostAsync(uri, inJSON).then([this](String^ outputJSON)
    {
        String^ assets = MyParser::Parse(outputJSON);
        // The login has returned and we have our JSON output data.
        if (_delegate)
        {
            _delegate->UpdateUI(assets);
        }
    });
}
// In MyService.cpp
task<String^> MyService::HTTPPostAsync(Uri^ uri, String^ json)
{
    return _httpRequest.PostAsync(uri,
        json->Data(),
        _cancellationTokenSource.get_token()).then([this](task<std::wstring> response)
    {
        try
        {
            if (_httpRequest.GetStatusCode() != 200) SM_LOG_WARNING("Status code=", _httpRequest.GetStatusCode());
            String^ j = ref new String(response.get().c_str());
            return j;
        }
        catch (Exception^ ex) .......;
        return ref new String(L"");
    }, task_continuation_context::use_current());
}
Edit: BTW, the error I get when I go to update the UI is:
"An invalid parameter was passed to a function that considers invalid parameters fatal."
In this case, all I am trying to execute in my callback is:
txtBox->Text = data;
It appears you are updating the UI from the wrong thread context. You need task_continuation_context::use_current() on the continuations that touch the UI. See the "Controlling the Execution Thread" example in this document (the discussion of marshaling is at the bottom).
So, it turns out that when you have a continuation, if you don't specify a context after the lambda function, it defaults to use_arbitrary(). This contradicts what I learned in an MS video.
However, by adding use_current() to all of the .then blocks that have anything to do with the GUI, my error goes away and everything renders properly.
My GUI calls a service which generates some tasks and then calls into an HTTP class that does asynchronous work too. Way back in the HTTP classes I use use_arbitrary() so that they can run on secondary threads. This works fine. Just be sure to use use_current() on anything that touches the GUI.
Now that you have my answer, if you look at the original code you will see that it already contains use_current(). This is true, but I left out a wrapping function for simplicity of the example. That is where I needed to add use_current().
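To illustrate the fix with the names from the question (assuming the continuation is created on the UI thread), a minimal sketch:
// The network work can run on an arbitrary thread; the continuation that
// touches XAML controls is pinned to the UI thread with use_current().
_myService->HTTPPostAsync(uri, inJSON).then([this](String^ outputJSON)
{
    txtBox->Text = MyParser::Parse(outputJSON); // safe: runs on the UI thread
}, task_continuation_context::use_current());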