How to stream video in iOS app from GoPro HERO4?

How do I stream video in an iOS app from a GoPro HERO4? I am able to stream from a GoPro HERO3. I have tried several ways but have failed to get the HERO4 to stream.

To start streaming you need to execute the URL:
http://10.5.5.9/gp/gpControl/execute?p1=gpStream&c1=start
To stop:
http://10.5.5.9/gp/gpControl/execute?p1=gpStream&c1=stop
Next you can read the streaming data from UDP port 8554.
Unfortunately this solution works only with older firmware versions; it does not work with the latest one, 02.00.00.
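As a minimal sketch of that flow in Node.js/TypeScript, assuming the camera behaves as described above (the MPEG-TS framing on port 8554 is a community-reported detail, not something confirmed in this thread):

import dgram from "node:dgram";

const CAMERA = "http://10.5.5.9";

async function main(): Promise<void> {
  // Same start URL as above; requires being on the camera's Wi-Fi network.
  await fetch(`${CAMERA}/gp/gpControl/execute?p1=gpStream&c1=start`);

  // The HERO4 reportedly pushes an MPEG-TS stream to the client on UDP 8554;
  // each datagram can be fed to a TS demuxer or a player such as ffplay.
  const socket = dgram.createSocket("udp4");
  socket.on("message", (packet) => {
    console.log(`received ${packet.length} bytes`);
  });
  socket.bind(8554);
}

main().catch(console.error);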

Related

Recording google meet through Webrtc

I am trying to record Google Meet audio and video using WebRTC. I found the MediaRecorder API for this, but I am unable to record the meeting: it only captures my webcam and microphone. How can I record the whole meeting through code, in any language?
One solution is to use Neko to record conferences. It is like Jibri, but works for any website!
It is a headless Chromium instance that runs on a server. You launch the Neko instance and then join the conference call. You can then capture the web page's audio and video.
You can specify an RTMP output (Twitch, YouTube...) or you can save the files.
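If you would rather stay in the browser than run a headless recorder, the underlying idea is the same: capture the page's audio and video instead of your webcam. A hedged sketch with getDisplayMedia and MediaRecorder (tab-audio capture currently works only in Chromium-based browsers; the function name is illustrative):

// Prompt the user to pick the Meet tab, record it for durationMs, and
// resolve with a single WebM blob.
async function recordTab(durationMs: number): Promise<Blob> {
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: true,
    audio: true, // tab audio, not the microphone
  });

  const chunks: Blob[] = [];
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  recorder.ondataavailable = (e) => { chunks.push(e.data); };

  const done = new Promise<Blob>((resolve) => {
    recorder.onstop = () => resolve(new Blob(chunks, { type: "video/webm" }));
  });

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
  return done;
}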

Unable to stream in stereo

I am using a 3D mic that works like a charm on the iPhone using a 1/8-inch jack into an adapter. It works great with the camera app, so I know the hardware is able to capture stereo.
However, in my agora.io iPhone app I have the following settings:
audio.setChannelProfile(.liveBroadcasting)
audio.setAudioProfile(.musicHighQualityStereo, scenario: .showRoom)
Is there anything else I need to do for it to work?
I was able to reach Agora Support. The following answer is what I received:
iOS devices do not support stereo audio capture. You would need to use an external video source which supports stereo audio to do the capture.
I wish this were included in the iOS documentation.
For my use case, a Mac app would be better, so I'm just going to go with that!

Audio tracks are missing in WebRTC call upgrade

We are facing a strange issue in a WebRTC call: in a connected, audio-only WebRTC call, when someone upgrades the call (adds video), the audio tracks drop on the originator's side.
Steps to reproduce the problem:
1. Make an audio-only call between two peers, A and B.
2. Upgrade the call to video by calling getUserMedia again from peer A.
3. Call is established.
4. A can hear audio and view video.
5. B can't hear audio.
What is the expected result?
In onaddstream(e), e.stream should contain both audio and video tracks.
What do you see instead?
Only the video track is present on B's side (the recipient).
What version of the product are you using? On what operating system?
Chrome 51 / Windows 7
Please find the WebRTC dump at the link below:
WebRTC dump
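A likely cause, judging from the symptoms (an assumption on my part, not confirmed by the dump), is that the second getUserMedia call replaces the whole local stream and with it the original audio track. A sketch of an upgrade path that keeps the audio track alive, using the modern addTrack API; pc and the signaling channel are assumed to already exist:

async function upgradeToVideo(pc: RTCPeerConnection): Promise<void> {
  // Request video only; the audio track negotiated earlier keeps flowing.
  const videoStream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [videoTrack] = videoStream.getVideoTracks();
  pc.addTrack(videoTrack, videoStream);

  // Adding a track requires renegotiation with the remote peer.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // Send the offer over your signaling channel, then apply the remote
  // answer with pc.setRemoteDescription(...) when it arrives.
}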

All the examples of WebRTC are video chat, is it possible to send any type of video over WebRTC?

So I want to be able to send a normal video from a video file (AVI or any other) through WebRTC; can that be done? The only examples I see of WebRTC are video chats, so I get the feeling it's only geared towards webcams and chat.
So my question is: technically, can sending normal video from a video file (not a webcam) over WebRTC be done?
Try: "Pre-recorded media streaming" --- Documentation and Source Code.
This experiment uses the MediaSource API to render Blobs in a <video> element. It has some issues that need to be fixed, e.g. it can't send longer WebM videos.
You can try this experiment as well.
The codecs typically used in AVI are not directly supported by WebRTC clients, but if you are writing your own standalone client then of course it could read an AVI or other video file, transcode it to VP8 video and Opus audio (or whatever other codecs you were able to negotiate), and transmit it via RTP. If you are trying to do video transcoding in JavaScript in a browser, that will be very slow.
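For the in-browser case there is also a middle road that avoids JavaScript transcoding entirely (my addition, not from the answers above): let the browser decode the file in a <video> element and capture the decoded output with captureStream(). This only works for containers and codecs the browser can already play (e.g. WebM or MP4, not AVI), and Firefox spells the method mozCaptureStream:

// pc is an already-signaled RTCPeerConnection; names are illustrative.
function streamFileOverWebRTC(pc: RTCPeerConnection, file: File): HTMLVideoElement {
  const video = document.createElement("video");
  video.src = URL.createObjectURL(file);
  video.muted = true; // allow autoplay without a user gesture
  void video.play();

  // captureStream() yields a MediaStream whose tracks can be added to the
  // peer connection exactly like webcam tracks.
  const stream = (video as any).captureStream() as MediaStream;
  for (const track of stream.getTracks()) {
    pc.addTrack(track, stream);
  }
  return video;
}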

Creating a WebRTC receiver

I am new to WebRTC and trying to figure out how to create a program outside a browser which receives a WebRTC audio stream and outputs it to the speakers.
Are there any WebRTC libraries for Java or C#?
The receiver will be running on a Linux machine.
--
I've been thinking about using getUserMedia() to access the microphone. But then:
In what format will such a stream be transmitted?
Let's say I use WebRTC2SIP and build a Java endpoint using JSIP;
or I just use a socket and send the stream over HTTP.
What audio format will I get on the receiver side? So far I have read that WebRTC compresses the stream somehow.
I guess there are two ways for you:
1. Build the whole WebRTC voice engine for Android/iOS or Mac etc., and just use the API provided by the voice engine (VoE).
2. Build standalone NS/VAD/AECM/AGC modules and use them in your project. For example, to build a standalone NS (noise suppression) module for an Android phone, you use AudioRecord (Java layer, Android side) to record sound from the mic, do the noise-suppression processing on that data (JNI layer, WebRTC side), and finally play back the processed data using AudioTrack (Java layer, Android side).
EDIT:
For the 2nd option, the format is raw PCM data.
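As a concrete illustration of a non-browser receiver on Linux (my addition; the thread itself does not mention it), the node-webrtc package ("wrtc" on npm) exposes RTCPeerConnection outside the browser, and its nonstandard RTCAudioSink hands you the decoded audio as exactly the raw PCM described above:

// Sketch assuming the "wrtc" (node-webrtc) package; signaling is not shown.
import wrtc from "wrtc";

const pc = new wrtc.RTCPeerConnection();

pc.ontrack = (event: { track: MediaStreamTrack }) => {
  // RTCAudioSink is a nonstandard node-webrtc extension; it fires ondata
  // with decoded 16-bit PCM frames you could pipe to ALSA/PulseAudio.
  const sink = new wrtc.nonstandard.RTCAudioSink(event.track);
  sink.ondata = (data: { samples: Int16Array; sampleRate: number; channelCount: number }) => {
    console.log(`${data.samples.length} samples @ ${data.sampleRate} Hz, ${data.channelCount} ch`);
  };
};

// Exchange the offer/answer and ICE candidates with the sending peer over
// your own signaling channel before audio starts flowing.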
Check out the working audio demo and code at demo.easyrtc.com.
The code is all open source and can be checked out at https://github.com/priologic/easyrtc
You can look for any known issues around EasyRTC in our forum at
https://groups.google.com/forum/#!forum/easyrtc
Also check out our main site at easyrtc.com.