With a P2P WebRTC connection, how can I use different resolutions for the video call and the photo capture? - webrtc

I'm working on a P2P WebRTC video call between a HoloLens 2 and a PC. I also need to support capturing photos (and sending them to a server). Right now both the video and the photos work at a resolution of 2272x1278, but I need a photo resolution of 3904x2196 (the highest the HoloLens 2 provides).
The problem is that when I tried to change the resolution, I found I could not do so while the call was in progress.
I use MediaCapture to take the photo, and the WebcamSource based on MixedReality-WebRTC runs in SharedReadOnly mode. I thought of one way to solve this: shut the call down when taking a photo, and restart it after the capture has finished. But the problems are:
How can I switch the WebcamSource to exclusive mode just while capturing the photo?
Can I make sure that the WebcamSource is released once the call has been shut down?
Or is there another way to use different resolutions for the video call and the photo capture? Thanks a lot.

How can I switch the WebcamSource to exclusive mode just while capturing the photo?
No. The SharingMode is hardcoded in UwpUtils, and no API is exposed to change it.
Can I make sure that the WebcamSource is released once the call has been shut down?
To make sure the audio and video tracks are disposed first and the media sources last, dispose of them in the following order:
// Dispose of the tracks first, then the media sources they pull from.
localAudioTrack?.Dispose();
localVideoTrack?.Dispose();
microphoneSource?.Dispose();
webcamSource?.Dispose();
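If you go the shutdown-and-restart route, a minimal sketch of the high-resolution capture step might look like this (the helper name TakeHighResPhotoAsync and the resolution-selection logic are illustrative assumptions, not part of MixedReality-WebRTC):

using System;
using System.Linq;
using System.Threading.Tasks;
using Windows.Media.Capture;
using Windows.Media.MediaProperties;
using Windows.Storage;

// Hypothetical helper: capture one photo at the largest photo-stream
// resolution the device reports (3904x2196 on HoloLens 2, per the question).
// Call this only after the WebRTC tracks and sources have been disposed.
public static async Task TakeHighResPhotoAsync()
{
    var mediaCapture = new MediaCapture();
    await mediaCapture.InitializeAsync();

    // Pick the photo-stream format with the most pixels. On some devices the
    // photo stream reports VideoEncodingProperties instead, so verify on your target.
    var photoProps = mediaCapture.VideoDeviceController
        .GetAvailableMediaStreamProperties(MediaStreamType.Photo)
        .OfType<ImageEncodingProperties>()
        .OrderByDescending(p => p.Width * p.Height)
        .First();
    await mediaCapture.VideoDeviceController
        .SetMediaStreamPropertiesAsync(MediaStreamType.Photo, photoProps);

    StorageFile file = await KnownFolders.CameraRoll.CreateFileAsync(
        "photo.jpg", CreationCollisionOption.GenerateUniqueName);
    await mediaCapture.CapturePhotoToStorageFileAsync(
        ImageEncodingProperties.CreateJpeg(), file);

    mediaCapture.Dispose(); // release the camera so the call can be restarted
}

Once the photo is saved and the MediaCapture is disposed, recreate the WebcamSource and tracks to resume the call.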

Related

Take pictures using camera shutter when receiving live view data

I am developing using EDSDK.
However, if I press the shutter of the camera (the physical button) while receiving live view data (EVF mode), the picture won't be taken. Is this normal?
My camera model is 200D II.
What I'm trying to do is as follows, and it's very simple.
My software activates the camera through EDSDK and receives live view data.
The person behind the camera takes a picture by pressing the camera shutter, and my software shows the picture on the screen.
The questions are as follows.
How can I take pictures using the physical camera shutter button while receiving live view data (EVF)?
HDMI connections are not considered because there are features that need to be controlled directly through EDSDK.
Thank you.
Below is what I added after Johannes Bildstein's answer. As he suggested, I inserted the following code to unlock the UI, but the problem still isn't solved.
if (!MainCamera.IsLiveViewOn) {
    MainCamera.StartLiveView();
    MainCamera.UILock(false); // unlock the camera UI after starting live view
}
An error message occurs when I try to unlock the UI and then get EVF data (the shutter still doesn't work). If I unlock the UI after receiving EVF data:
When the dial is in photo mode: EVF data comes in, but the shutter still does not work.
When the dial is in video mode: EVF data does not come in due to a BUSY error. Is this a conflict caused by unlocking the UI? We have checked your answer and the SDK documents and tried many approaches, but the problem is still unresolved. We are currently testing with the more recent model, the 200D II.
"| EvfOutputDevice.Camera " should be added like below!
public void StartLiveView()
{
    CheckState();
    // Route the EVF output to the camera as well as the PC so the
    // physical shutter button keeps working during live view.
    if (!IsLiveViewOn) SetSetting(PropertyID.Evf_OutputDevice, (int)(EvfOutputDevice.PC | EvfOutputDevice.Camera));
}
The camera "UI" is probably locked. The EDSDK does that automatically when connecting and before doing certain commands. You can unlock it with EdsSendStatusCommand using kEdsCameraStatusCommand_UIUnLock.

DSC-HX400 RAW image data & Movie Recording

I am currently testing a DSC-HX400. While I am able to do almost everything I need with the camera, there are a couple of items not exposed via the API that have frustrated my efforts.
1) The camera does not seem to offer an option, via the API or on the camera itself, to capture images in RAW format. It does offer standard and fine JPEG, but both leave artifacts in the image that become extremely noticeable when you zoom in with an image editor. Is there a way to get the camera to capture RAW images? I do not need the SDK to return the data, just to save it to the card. If getting RAW data is impossible, has anyone found an inventive way to clean up the artifacts?
2) The camera supports both still-shoot and movie modes, but the API only exposes the mode I am currently in. That makes it impossible to transition from still to movie mode (to allow recording) through the API, yet I can make that same transition by pressing a single button on the camera. Once I am recording a movie, the API does let me transition back to still mode (by cancelling the recording). Are there plans to support triggering a movie recording via the API while in still capture mode (seeing that the firmware already supports this functionality)?
Answers to the questions:
If the camera cannot capture RAW images, the API will not be able to either. I do not know of a way to capture RAW images, but I can only comment with regard to the API, as I am not an expert on using the camera itself.
You can change between still and movie mode by using the "setShootMode" API.
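The Camera Remote API is JSON-RPC over HTTP, so the call can be issued with a plain POST. A minimal C# sketch, assuming the usual /sony/camera endpoint (the address below is a placeholder; the real endpoint URL is discovered via SSDP):

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// mode is "still" or "movie".
public static async Task SetShootModeAsync(HttpClient http, string mode)
{
    string json = "{\"method\":\"setShootMode\",\"params\":[\"" + mode +
                  "\"],\"id\":1,\"version\":\"1.0\"}";
    HttpResponseMessage response = await http.PostAsync(
        "http://192.168.122.1:8080/sony/camera",   // placeholder endpoint
        new StringContent(json, Encoding.UTF8, "application/json"));
    // On success the reply contains "result":[0]; failures carry an "error" member.
    string body = await response.Content.ReadAsStringAsync();
}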

How to connect/disconnect the camera device using getUserMedia and WebRTC

I am creating an audio/video and chat application using WebRTC and Node.js. I need to mute and unmute the camera device.
At present I am able to disconnect, and the other party can no longer see me, but the problem is that this doesn't disconnect the camera: it stays active and connected, as I can see the camera light is still on.
I need help disconnecting the camera when muted and reconnecting it when unmuted. I want the same behavior as a Skype video call.
It varies a bit between Firefox and Chrome. These steps, in this order, work for me.
1) Set the src property on your video element to the empty string ''.
2) Make sure the stop method exists before calling it as a function. Firefox doesn't have it, and if you try to run it, your code will throw an error.
// MediaStream.stop() is deprecated; current browsers stop each
// track instead: localStream.getTracks().forEach(t => t.stop());
if (localStream && localStream.stop) {
  localStream.stop();
}
3) After you call localStream.stop() (or not), set localStream = null. (This may not actually be necessary, but it can't hurt to let the object be garbage-collected. And when the user asks to start the camera again, you can check the variable to see whether you need to clean up after the previous stream before starting a new one.)
When you get your media, keep the localstream in a variable in your success callback. Then, when you want to stop the stream, call localstream.stop();
To start again, simply call your getUserMedia() method once more.

I cannot get a QTCaptureSession to capture when in a terminal application

I've got a terminal application that needs to take a webcam picture and then perform some processing on it. I'm having trouble getting it to initialize. There's a fairly complete demo with an app called MyRecorder in the Apple docs that uses QTKit, which I was able to make work fine. I was also able to modify it to grab a single frame instead of a stream.
When I move this to a terminal application, the QTCaptureSession's startRunning command simply does nothing. There are no errors, and everything reports success, but my webcam doesn't light up, and no frames are captured.
Any idea what's going on here? Are there any kind of security restrictions, or other kinds of restrictions that would prevent the QTCaptureSession from working?
Switching to AVFoundation solved my problem. I'm still not certain what the issue was, but for now AVFoundation seems like the way to go, since it was designed to replace QTKit anyway.

Block iPad camera

Is there any way to restrict the use of the iPad 2's camera to my application only, even if it requires using iTunes?
I could not find any code related to this; some sample code would be helpful.
There's no way to achieve this. It might be possible with quite a bit of hacking if you were developing for Cydia, but I'm not sure even then. If the user quits your application or switches away from it, the system will make the camera available to any other app that requests it.