Using DJI Windows SDK to display/decode a video streamed over UDP/RDP

I am wondering if anybody knows if it's possible to use the DJI Windows SDK to decode a video in real-time (render video frames as the video is being retrieved frame-by-frame)?
I don't see anything relevant in the documentation or API reference sections from DJI Windows SDK.
At this point I'll have to dig into the samples and see if there is anything useful there. Otherwise the online documentation seems rather useless.
Here is the DJI Windows SDK documentation.

I agree with you that the DJI documentation sucks, but what you are asking is unclear.
You want to use the DJI Windows SDK to decode a video. If you have an online video and want to decode it, why not use ffmpeg and ffplay (for example, the one-liner shown below)? We use those for the DJI Tello and IP cameras all the time.
If you want to grab the feed from the drone, there is a DJI GitHub sample that shows you how: https://github.com/DJI-Windows-SDK-Tutorials/Windows-FPVDemo/tree/master/DJIWSDKFPVDemo
So I'm not 100% sure what your use case is.
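A minimal sketch of the ffplay route, assuming a Tello-style raw H.264 feed arriving on UDP port 11111 (substitute whatever port your drone or relay actually streams to):
ffplay -fflags nobuffer udp://0.0.0.0:11111
ffplay probes, decodes, and renders the stream as it arrives; if you need the decoded frames inside your own application, the same URL can be opened programmatically with the ffmpeg libraries instead.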

Related

Unable to stream in stereo

I am using a 3D mic that works like a charm on the iPhone, using a 1/8-inch jack into an adapter. It works great with the camera app, so I know the hardware is able to receive the stereo.
However in my agora.io iPhone app I have the following settings:
audio.setChannelProfile(.liveBroadcasting)
audio.setAudioProfile(.musicHighQualityStereo, scenario: .showRoom)
Is there anything else I need to do for it to work?
I was able to reach Agora Support. The following answer is what I received:
iOS devices do not support stereo audio capture. You would need to use an external video source that supports stereo audio to do the capture.
I wish this were included in the iOS documentation.
For my use case, a Mac app would be better, so I'm just going to go with that!

How to make live audio streaming and downloading happen simultaneously in iOS using Objective-C

I am creating an application with live audio streaming, and I want the streaming and downloading to happen simultaneously (the way WhatsApp handles files). Please let me know if someone can help or guide me.
You can insert an AVAssetResourceLoader between your AVPlayer and the internet to allow caching while you stream (a minimal sketch follows the links below).
Here are some examples I found on GitHub:
AVAssetResourceLoader
VIMediaCache
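For a rough idea of the mechanism, here is a minimal Swift sketch (the same AVFoundation APIs are available from Objective-C). Everything in it — the "cachingstream" scheme, the URLs, the single-shot download with no byte-range handling, the assumed MP3 content type — is an illustrative assumption; the projects linked above handle the real-world details such as range requests and partial caches.

import AVFoundation
import Foundation

// Sketch of AVAssetResourceLoaderDelegate-based caching: download the resource ourselves,
// serve the bytes to the player, and write the same bytes to a local cache file.
final class CachingResourceLoader: NSObject, AVAssetResourceLoaderDelegate {
    private let remoteURL: URL
    private let cacheFileURL: URL

    init(remoteURL: URL, cacheFileURL: URL) {
        self.remoteURL = remoteURL
        self.cacheFileURL = cacheFileURL
    }

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        // Fetch the real URL so the data can go both to the player and to disk.
        URLSession.shared.dataTask(with: remoteURL) { data, _, error in
            guard let data = data else {
                loadingRequest.finishLoading(with: error)
                return
            }
            try? data.write(to: self.cacheFileURL)                                    // the "download" half
            loadingRequest.contentInformationRequest?.contentType = "public.mp3"      // assumed MP3; use the real UTI
            loadingRequest.contentInformationRequest?.contentLength = Int64(data.count)
            loadingRequest.contentInformationRequest?.isByteRangeAccessSupported = false
            loadingRequest.dataRequest?.respond(with: data)                           // the "streaming" half
            loadingRequest.finishLoading()
        }.resume()
        return true  // we will satisfy the request asynchronously
    }
}

// Usage: give AVPlayer a custom scheme so it has to ask the delegate for the data.
let remote = URL(string: "https://example.com/live/audio.mp3")!           // hypothetical source URL
var comps = URLComponents(url: remote, resolvingAgainstBaseURL: false)!
comps.scheme = "cachingstream"                                            // any non-HTTP scheme routes through the delegate
let loader = CachingResourceLoader(
    remoteURL: remote,
    cacheFileURL: FileManager.default.temporaryDirectory.appendingPathComponent("audio.mp3"))
let asset = AVURLAsset(url: comps.url!)
asset.resourceLoader.setDelegate(loader, queue: DispatchQueue(label: "resource.loader"))
let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))
player.play()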

How to Capture Live Stream from camera and watch live or save for later viewing?

I am trying to make an online examination portal. When students start the exam, their webcam will start automatically, record the stream live, and store it on the server. Invigilators will either watch the students live or watch the saved streams later.
I researched this and found WebRTC as a possible solution, along with a gateway server like Kurento. But I later found out that WebRTC is not supported in Safari, which is a setback! My application should run as a web portal in any modern browser, including Safari, and also on Android or iPhone.
So can anyone suggest a possible solution to my problem? Which technology should I use that can support all browsers and OS?
Also, it would be helpful if you can provide links to good documentation or tutorials.
Note from the future (2020): This answer really isn't accurate anymore.
WebRTC is one problem... capture from the camera with getUserMedia is another. Safari doesn't support either.
There is no video capture API in Safari currently. The only thing you can do is make a native app for iOS.
Worse yet, because of Apple's restrictive policies, alternative browsers, such as Chrome, are crippled on iOS as they aren't allowed to use their own browser engines.
Use standards based technologies like getUserMedia and WebRTC for your primary web-based application. If you decide that the economics of your situation enable it, you can make an iOS app to work alongside until Apple decides to participate in modern browser standards like everyone else.
You can use MediaDevices.getUserMedia (https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia) to capture the webcam stream in the browser (Chrome and Firefox).
To work with the webcam stream in Safari, you would have to use a polyfill: https://github.com/Temasys/AdapterJS
To record the video/audio stream, you can make use of the MediaRecorder API: https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder (Note: recording the stream is still a challenge in Safari, as there is no support/polyfill. However, it works perfectly on the latest versions of Chrome and Firefox.)
Helpful demonstrations:
https://webrtc.github.io/samples/
https://mozdevs.github.io/MediaRecorder-examples/index.html
https://codepen.io/collection/XjkNbN/
https://hacks.mozilla.org/2016/04/record-almost-everything-in-the-browser-with-mediarecorder/

Sony Camera API: change exposure parameters?

Looking at the supported APIs for the various cameras, both in the PDF docs and on the supported cameras page (https://developer.sony.com/develop/cameras/), it looks like the A7R supports changing f-stop, shutter speed, ISO, etc. However, within the iOS sample app none of those commands are reported by the camera as available via the getAvailableApiList command. "Forcing" a setIsoSpeedRate command to the camera didn't work either; I get error code 403: Forbidden.
So is this supported? Am I missing something? Does the camera need to be configured in a particular way? Of note, I am running firmware 1.10 on the A7R, which I believe is the latest.
Please install the latest version (ver. 3.10) of the "Smart Remote Control" application from PlayMemories Camera Apps and try again.
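As a point of reference, "forcing" a call such as setIsoSpeedRate, as described in the question, is just an HTTP POST of a JSON-RPC-style body to the camera service. A minimal Swift sketch follows; the endpoint URL below is a placeholder (the real one is discovered from the camera's device-description XML over SSDP), and the camera will keep answering with an error such as [403, "Forbidden"] if the method is not currently exposed by getAvailableApiList.

import Foundation

// Placeholder endpoint: the real URL comes from the camera's SSDP device description.
let endpoint = URL(string: "http://192.168.122.1:8080/sony/camera")!

// Sony Camera Remote API calls are JSON-RPC-style bodies posted over HTTP.
let body: [String: Any] = [
    "method": "setIsoSpeedRate",
    "params": ["800"],          // example ISO value, passed as a string
    "id": 1,
    "version": "1.0"
]

var request = URLRequest(url: endpoint)
request.httpMethod = "POST"
request.httpBody = try? JSONSerialization.data(withJSONObject: body)

URLSession.shared.dataTask(with: request) { data, _, _ in
    // Print the raw JSON reply, e.g. a result array on success or an error array like [403, "Forbidden"].
    if let data = data, let text = String(data: data, encoding: .utf8) {
        print(text)
    }
}.resume()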

Lots of noise in WebRTC audio/video

I have developed a video chat app with the WebRTC API, following the steps given by WebRTC. Video works fine, but there is a lot of noise from my laptop; the sound is not clear.
Google's demo site https://apprtc.appspot.com/ works without any noise (better compared with ours).
I followed the same procedure they did, but no luck.
With a headset the echo is not audible; it only happens when we listen to the sound from the laptop speakers without a headset.
Please give me some suggestions on this.
Thanks in advance. Looking forward to your response.
Take a look at this demo of WebRTC conferencing that supports four callers. This link describes the implementation details and architecture.