How to send a texture with Agora Video SDK for Unity

I'm using the package Agora Video SDK for Unity and I have followed these two tutorials:
https://www.agora.io/en/blog/agora-video-sdk-for-unity-quick-start-programming-guide/
https://docs.agora.io/en/Video/screensharing_unity?platform=Unity
Up to here, it is working fine. The problem is that instead of sharing my screen, I want to send a texture. To do so, I'm loading a PNG picture and trying to set it to the mTexture you find in the second link. It seems to be working on my computer, but it never seems to arrive at the target computer.
How can I send a texture properly?
Thanks

Did you copy every line of the code from the example as is? You may not want the ReadPixels part, since that reads the screen. You can instead read the raw data from your input texture and send it with PushVideoFrame every update.
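For concreteness, here is a rough sketch of what the answer above describes, modelled on the screen-sharing tutorial but without ReadPixels. It assumes the 3.x Agora Unity SDK API (IRtcEngine, ExternalVideoFrame), a placeholder app ID and channel name, and a PNG imported with Read/Write enabled; the pixel format may need adjusting to your SDK version and texture format.

    using UnityEngine;
    using agora_gaming_rtc;

    public class TexturePusher : MonoBehaviour
    {
        public Texture2D sourceTexture;  // the PNG; import settings need Read/Write enabled
        private IRtcEngine mRtcEngine;
        private int timestamp = 0;

        void Start()
        {
            mRtcEngine = IRtcEngine.GetEngine("YOUR_APP_ID");  // placeholder
            mRtcEngine.SetExternalVideoSource(true, false);    // we supply the frames ourselves
            mRtcEngine.JoinChannel("channel_name", "", 0);     // placeholder channel
        }

        void Update()
        {
            // No ReadPixels: just take the texture's raw bytes every frame.
            byte[] bytes = sourceTexture.GetRawTextureData();

            var frame = new ExternalVideoFrame();
            frame.type = ExternalVideoFrame.VIDEO_BUFFER_TYPE.VIDEO_BUFFER_RAW_DATA;
            // Must match the texture's byte layout: a PNG loaded into an
            // RGBA32 Texture2D is RGBA, not the BGRA the screen-share sample uses.
            frame.format = ExternalVideoFrame.VIDEO_PIXEL_FORMAT.VIDEO_PIXEL_RGBA;
            frame.buffer = bytes;
            frame.stride = sourceTexture.width;
            frame.height = sourceTexture.height;
            frame.rotation = 180;            // as in the screen-share sample
            frame.timestamp = timestamp++;   // must keep increasing
            mRtcEngine.PushVideoFrame(frame);
        }
    }

Two things commonly bite here: the texture must actually be readable (otherwise GetRawTextureData throws), and SetExternalVideoSource(true) must be called before joining the channel.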


Image Detection for expo react native

I would like to include a feature in my app where you can scan a certain picture and the app would recognise this image, just like the image recognition feature in Viro: https://docs.viromedia.com/docs/ar-image-recognition . So I would set a certain image and it only needs to recognise this image.
I'm working with Expo (React Native).
Does anybody have an idea how I might build this feature?
Thanks 😊
You could just have it call a Python service on the backend: pass it the image and get back the result.
If you want to go serverless, you can even use a premade AWS Lambda function, which you can call with the image, and it'll do the processing for you.
In the end, it's better to do this kind of processing on the server side; image processing can take time, and you don't want your mobile app locking up while it runs.
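Since the answer points at a server-side Python service, here is a hypothetical sketch of what such a service could look like, using Flask plus OpenCV ORB feature matching against one known reference image. Every name, route and threshold here is illustrative, not a known API:

    import cv2
    import numpy as np
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    # The one image the app should recognise.
    orb = cv2.ORB_create()
    reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
    ref_kp, ref_des = orb.detectAndCompute(reference, None)

    @app.route("/recognise", methods=["POST"])
    def recognise():
        # The Expo app POSTs the camera picture as a multipart file upload.
        data = np.frombuffer(request.files["image"].read(), np.uint8)
        img = cv2.imdecode(data, cv2.IMREAD_GRAYSCALE)
        kp, des = orb.detectAndCompute(img, None)
        if des is None:
            return jsonify({"match": False})

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(ref_des, des)
        good = [m for m in matches if m.distance < 40]   # arbitrary threshold
        return jsonify({"match": len(good) > 25, "good_matches": len(good)})

    if __name__ == "__main__":
        app.run()

On the Expo side you would then upload the photo with fetch and FormData and branch on the JSON result; the same request shape works unchanged if you later move the matching into a Lambda function.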

Deleting content from Sony a6300 camera

I am developing a Python remote control application using the open source pysony library. The program should be able to shoot images in a loop, download them, and then delete the artifacts on the camera (we can't manually format the SD card, since the whole point is remote control).
I'd like to point out that I have read this post about the correct way of remotely deleting files. I have read the Sony API documentation and have successfully managed to control everything I need to, except for deleting images. The camera in question is a Sony a6300, updated to the latest firmware, as is its API.
The problem is that the camera returns a success response ({'result': [], 'id': 1}) after being asked to delete a set of image URIs, but the images remain on the camera. I am using the remote control app and am connected directly to the camera's wifi (making this the standard 1:1 connection). When I issue the delete command, the camera's screen briefly displays a "controlling with smartphone... you cannot directly operate this device" message.
I have searched all around the web and can't seem to find an answer.
Thank you in advance!
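One thing worth ruling out (no answer was posted here, so this is only a hedged sketch): the Camera Remote API reference ties deleteContent to the "Contents Transfer" camera function, and the "controlling with smartphone" message suggests the camera is still in remote-shooting mode when the command arrives. The call order would look roughly like this with plain requests (pysony wraps the same JSON-RPC calls; the IP and URI are placeholders):

    import requests

    CAMERA = "http://192.168.122.1:8080"   # typical address on the camera's own wifi

    def call(service, method, params, version="1.0"):
        payload = {"method": method, "params": params, "id": 1, "version": version}
        return requests.post(f"{CAMERA}/sony/{service}", json=payload).json()

    # 1. Leave remote-shooting mode so content operations become valid.
    #    (In practice you may need to poll getEvent until the switch completes.)
    call("camera", "setCameraFunction", ["Contents Transfer"])

    # 2. Delete by URI, as returned by getContentList; deleteContent is
    #    version 1.1 in the API reference.
    call("avContent", "deleteContent", [{"uri": ["image:content?contentId=..."]}],
         version="1.1")

    # 3. Switch back to keep shooting remotely.
    call("camera", "setCameraFunction", ["Remote Shooting"])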

DSC-HX400 RAW image data & Movie Recording

I am currently testing a DSC-HX400. While I am able to do almost everything I need to with the camera, there are a couple of items that are not exposed via the API and that have frustrated my efforts.
1) The camera does not seem to offer an option, via the API or the camera itself, to capture images in RAW format. It does offer standard & fine JPEG, but both of those leave artifacts in the image that become extremely noticeable when you zoom in with an image editor. Is there a way to get the camera to capture RAW images? I do not need the SDK to return the data, just to save it out to the card. If getting the RAW data is impossible, has anyone found an inventive way to clean up the artifacts?
2) The camera supports both still-shoot and movie mode, but the API will only expose the mode I am currently in. That makes it impossible to transition from still to movie mode (to allow recording) via the API, yet I can make that same transition by pressing a single button on the camera. Once I am recording a movie, the API will let me transition back to still mode (by cancelling the recording). Are there plans to support triggering a movie recording via the API from still capture mode (seeing that the firmware already supports this functionality)?
Answers to the questions:
If the camera cannot capture RAW images, the API will not be able to either. I do not know of a way to capture RAW images but can only comment with regards to the API as I am not an expert on usage of the camera itself.
You can change between still and movie mode by using the "setShootMode" API.
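For reference, setShootMode is a plain JSON-RPC call against the camera service; a minimal sketch in Python (the address is the usual one when connected directly to the camera's wifi):

    import requests

    resp = requests.post(
        "http://192.168.122.1:8080/sony/camera",
        json={"method": "setShootMode", "params": ["movie"], "id": 1, "version": "1.0"},
    ).json()
    print(resp)   # a bare {'result': [0], 'id': 1} indicates success

Passing ["still"] switches back; getAvailableShootMode tells you which modes the camera will accept in its current state.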

Has anyone been able to use web images with the Tiles and Badges App sample?

I've been trying to get my app's tile to display an image from the web, but couldn't get it to work. I then tried the Tiles and Badges app sample, where in scenario 3 you can send a tile notification that uses a web image. No matter which image URL I paste in the text box, the tile refuses to update. So apparently the sample isn't working either, or something is very wrong.
The images are all smaller than 1024x1024 and less than 200KB. Fun fact: if I download one of the images I unsuccessfully tried to feed the sample, add it to the project, and then send a notification using it as a local image, the tile gets updated. So apparently the image itself isn't the problem.
Has anybody been able to get this working? I don't get what I'm doing wrong.
Do you have the Internet capability ticked in the app manifest? Maybe your app just doesn't have permission to download the image from the web.
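If that is the cause, the fix is a checkbox in Package.appxmanifest; in the raw manifest it would look something like this (the Visual Studio manifest designer labels it "Internet (Client)"):

    <Capabilities>
      <Capability Name="internetClient" />
    </Capabilities>

Without that capability the tile updater cannot fetch web images, which would match the symptom that local images work fine.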

I cannot get a QTCaptureSession to Capture when in a Terminal Application

I've got a terminal application that needs to take a webcam picture and then perform some processing on it. I'm having trouble getting the capture to initialize. There's a fairly complete demo in the Apple docs, an app called MyRecorder that uses QTKit, which I was able to make work fine. I was also able to modify it to grab a single frame instead of a stream.
When I move this to a terminal application, calling startRunning on the QTCaptureSession simply does nothing. There are no errors, and everything reports success, but my webcam doesn't light up and no frames are captured.
Any idea what's going on here? Are there any kind of security restrictions, or other kinds of restrictions that would prevent the QTCaptureSession from working?
So switching to AVFoundation solved my problem. I'm still not certain what the issue was, but for now AVFoundation seems like the way to go, since it was designed to replace QTKit anyway.