Is there any public RTP repo where I can download a lot of RTP traces (in addition to the example Wireshark traces)? Any suggestions are welcome.
www.wiresharkbook.com Downloads section
http://pcapr.net/browse/voip
I've searched a lot but couldn't find an example.
I want to use nanoFramework as a webserver where I can upload and download e.g. a JSON file from the browser that holds all my settings. Is this possible?
Otherwise, if I want to change some settings, I have to rebuild the whole solution and upload it.
Thanks in advance
You can use the Storage libraries to store that JSON file. The actual storage can be backed by flash (using SPIFFS), an SD card, or a USB mass storage device; this depends on the hardware platform that you are using. Check the Storage samples in our samples repo here.
Downloading a file is pretty straightforward: you just need to serve the respective HTTP request. Check the HTTP samples in our samples repo here.
Uploading a file is a matter of handling the POST request and grabbing the data sent by the client browser.
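As a sketch of the client side, any HTTP client (the browser, or a script) can fetch and push that JSON file with a plain GET and POST. The device address and the /settings.json route below are assumptions; use whatever endpoint your nanoFramework handler actually serves.

```python
import json
import urllib.request

DEVICE = "http://192.168.1.50"  # hypothetical device address

def build_download_request(base=DEVICE):
    # GET the settings file the device serves
    return urllib.request.Request(base + "/settings.json", method="GET")

def build_upload_request(settings, base=DEVICE):
    # POST the new settings back; the device-side handler reads the body
    body = json.dumps(settings).encode("utf-8")
    return urllib.request.Request(
        base + "/settings.json",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# On a real device you would then do:
# urllib.request.urlopen(build_upload_request({"interval": 30}))
```

The requests are built separately from being sent, so the same sketch works whether the settings page is served to a browser or driven from a script.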
In the gstreamer streamingtest example
(https://janus.conf.meetecho.com/streamingtest.html)
a GStreamer pipe is sending to udpsink host=127.0.0.1 port=5004, which is then broadcast via WebRTC in Janus.
How can another user's webcam stream, captured in his browser via getUserMedia(), be sent to Janus-Gateway for broadcasting?
Do I have to configure a pipeline for it as well, and what would that look like?
I have installed Janus and I am able to run all the demos.
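For context, the pipeline feeding the streaming demo's udpsink on port 5004 looks roughly like the sketch below. The element choices (v4l2src, VP8) and the payload type are assumptions and must match your janus.plugin.streaming mountpoint config; note that this captures a webcam on the machine running GStreamer, not in a remote user's browser.

```python
import shlex

# Sketch: the kind of gst-launch pipeline that feeds a Janus streaming
# mountpoint listening on udpsink port 5004. Codec (VP8) and payload
# type (pt=100) are assumptions; match them to your streaming config.
pipeline = (
    "gst-launch-1.0 v4l2src "
    "! videoconvert ! vp8enc "
    "! rtpvp8pay pt=100 "
    "! udpsink host=127.0.0.1 port=5004"
)

args = shlex.split(pipeline)
# import subprocess; subprocess.run(args)  # on a host with GStreamer installed
```

Getting a remote browser's camera into Janus needs a different path (the videoroom plugin), which is what the answer below covers.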
There is an rtp_forward request available on the videoroom, which forwards the RTP from a publisher in that room to the streaming plugin or any other IP.
it was added here:
https://github.com/meetecho/janus-gateway/pull/255
Instead of rtp_listen, though, you should request rtp_forward and also pass in the secret.
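Concretely, rtp_forward is just a videoroom message body sent over an attached plugin handle. A sketch of the JSON, where the room, publisher_id, port, and secret values are placeholders:

```python
import json

# Sketch of an rtp_forward request to the Janus videoroom plugin.
# All field values below are placeholders; send this as the body of a
# regular Janus "message" on an attached videoroom handle.
body = {
    "request": "rtp_forward",
    "room": 1234,
    "publisher_id": 5678,
    "host": "127.0.0.1",
    "video_port": 5004,
    "secret": "adminpwd",
}

message = {
    "janus": "message",
    "transaction": "tx-1",  # any unique string
    "body": body,
}

payload = json.dumps(message)
```

The forwarded RTP then lands on the host/port pair, where a streaming-plugin mountpoint (or anything else) can pick it up.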
(This solution needs a browser, but I marked it as the right solution since it works for me this way, and scaling to more users is also possible like this.)
I want WebRTC to encode and play an H.264 (NAL) stream from a local file.
In the WebRTC tutorial, getUserMedia is used to get the local camera connected to the system; I don't know whether getUserMedia supports capturing a local file such as an H.264 stream.
If it doesn't work that way, maybe I should modify the WebRTC source code (I'm studying it).
Here is the question: if I change the WebRTC code, how can I integrate the new code into the browser? Make it a plugin?
Firefox supports an extension to the <video> element that you can use to do this.
First, set the source of a video element:
v1.src = "file:///...";
Then you can call the (currently prefixed) mozCaptureStream or mozCaptureStreamUntilEnded function to get a MediaStream.
stream = v1.mozCaptureStream();
The proposed specification.
Note, however, that you need to ensure that the file is same-origin with respect to the page; otherwise your MediaStream isn't going to be accessible to you. The same-origin rules for file:/// are probably going to cause issues. One way to avoid that is not to set the location directly, but to load the file using an <input type="file"> element.
As noted in other answers, Firefox currently only supports the baseline profile of H.264.
First, you are right: getUserMedia will not work for you. However, there are a couple of options.
Hack a stream together using RTCDataChannel: break up the media stream, deliver each packet, and then handle it on the client side.
Take a look at this demo for prerecorded media streams. I do not believe that H.264 is addressed, but it could help you on your way (probably Firefox only).
Use some sort of native WebRTC breaker/endpoint to stream the file. I know specifically that others (including myself) have streamed H.264 to Firefox through the Janus-Gateway.
Couple of asides:
Firefox only supports the Baseline profile when streaming H.264 over a WebRTC PeerConnection
Chrome does not support H.264 for WebRTC at all
Are you trying to have getUserMedia return an H.264-encoded stream?
In that case, today it is only possible with Firefox, in a specific environment (the Cisco H.264 plugin installed), and only for the baseline profile.
Chrome promised in November to add this capability, but there is no timeline that I know of; expect at least Q2 2015.
Using our (Temasys) commercial plugin you will soon be able to do that in IE and Safari.
Those are the only client-side options I can think of. On the server side you can use whatever you want to transcode, including Janus, Kurento, PowerMedia, Licode/Lynkia, ....
Note: using other means like the DataChannel or WebSockets is fine for transferring files, but it would greatly reduce the user experience, as you would not have the recovery (and security) mechanisms included in SRTP and DTLS, and you would also miss the enhancements for mistreated media that are built into WebRTC, like jitter buffers, NetEQ, etc.
CRTMP Server is a great tool... NAT traversal when the client is behind a router works great.
...
Tested on Android 2.2, 2.3, and 4.1: RTSP streaming is OK (RTMP Flash also OK).
But RTSP on RealPlayer (Helix DNA 10.0 on S60) always shows 'cannot play media' or 'cannot connect' (the connection is definitely established; checked with Wireshark).
...
(It is a programming-related problem, because I am willing to explore the CRTMP code to accomplish a solution.)
A BBC RTSP channel (with Wowza behind it) shows fine in the Symbian RealPlayer, but this streaming source:
ffmpeg -i rtsp://[bbc_channel_address] -c copy -f rtsp rtsp://[crtmp_server_addr]:8554/ch
... is OK for Android, but does not work for RealPlayer on S60.
Does anybody have a clue about the reason?
Having RTSP working is not enough. Different phones have different requirements in terms of A/V codec quality. The content you want to deliver may be too high quality for that device. This assumption fits well with what you said (working on Android, but not on RealPlayer).
You can ask these questions on the crtmpserver mailing list; consult http://rtmpd.com/resources/ for details about the mailing list.
Edit:
Or the codecs you are trying to push from crtmpserver towards the phone are not supported at all, let alone hitting the maximum quality limit.
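One way to test that theory is to transcode instead of using "-c copy", forcing conservative codecs and rates that an old S60 RealPlayer is more likely to accept. The sketch below only builds the ffmpeg command line; the codec and bitrate choices are guesses to experiment with, not known-good settings, and the hosts are the placeholders from the question.

```python
import shlex

# Sketch: replace "-c copy" with a low-spec transcode. H.263 at CIF and
# low-rate AAC are assumptions to experiment with for an old S60 player.
src = "rtsp://bbc_channel_address"        # placeholder, as in the question
dst = "rtsp://crtmp_server_addr:8554/ch"  # placeholder, as in the question

cmd = (
    "ffmpeg -i " + src + " "
    "-c:v h263 -s 352x288 -b:v 256k "   # H.263 video, CIF, low bitrate
    "-c:a aac -ar 16000 -b:a 32k "      # low-rate audio
    "-f rtsp " + dst
)
args = shlex.split(cmd)
```

If the stream then plays, the original pass-through codecs were the problem; you can step the quality back up until the player breaks again.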
I am publishing a stream on Red5 using the microphone with client-side AS3 code, but it does not publish a good stream, while the same thing on FMS creates a perfect stream.
I need to understand what the issue is when publishing on Red5.
Read the Red5 documentation for that. And of course there are differences between the performance of the two servers. However, if you want to improve the quality of the stream, you can use FFmpeg or Xuggler with Red5 to encode streams.
Because you are not saying what your encoder is, it is hard to give a clear answer. If you are using Adobe's FMLE to create the stream that goes to your FMS server, it is the FMLE that explains why you get good video and audio encoding out of the box.
I have never tried to use FMLE with Red5, so I cannot tell you whether it works, but it is doubtful that it works out of the box. It can probably be made to work with a bit of tweaking on both the client and server side.
To use your own encoder, capture two streams using ffmpeg; a great example of how to do that is on Stack Overflow here.
Once you are capturing, you can use ffmpeg to send the combined audio and video streams to a file, or send them directly to your Red5 server. A simplified version of the ffmpeg command, showing how to map two streams to a single RTMP output, is shown below:
ffmpeg -i video_stream -i audio_stream -map 0:0 -map 1:0 -f flv rtmp://my.red5.server:1935/live/mystream