crtmpserver is a great tool... NAT traversal when the client is behind a router works great.
...
Tested on Android 2.2, 2.3, and 4.1; RTSP streaming is OK (RTMP/Flash is also OK).
But RTSP in RealPlayer (Helix DNA 10.0 on S60) always shows 'cannot play media' or 'cannot connect' (the connection is definitely established - checked with Wireshark).
...
(It is a programming-related problem, because I am willing to explore the crtmpserver code to build a solution.)
A BBC RTSP channel (with Wowza behind it) displays fine in the Symbian RealPlayer, but restreaming that source:
ffmpeg -i rtsp://[bbc_channel_address] -c copy -f rtsp rtsp://[crtmp_server_addr]:8554/ch
... works on Android, but not in RealPlayer on S60.
Does anybody have a clue about the reason?
Having RTSP working is not enough. Different phones have different requirements in terms of A/V codec quality. The content you want to deliver may simply be too high-quality for that device. This assumption fits well with what you said (working on Android, but not in RealPlayer).
You can ask these questions on the crtmpserver mailing list. Consult http://rtmpd.com/resources/ for details about the mailing list.
Edit:
Or the codecs you are trying to push from crtmpserver towards the phone are not even supported, let alone hitting the quality limits.
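If codecs turn out to be the problem, one thing to try is transcoding down to the conservative codecs S60-era RealPlayer is known to handle instead of copying the streams. A rough sketch with ffmpeg, assuming your build includes the libopencore_amrnb encoder; the H.263/AMR-NB choice and the bitrates here are assumptions about the handset, not confirmed values:
ffmpeg -i rtsp://[bbc_channel_address] -c:v h263 -s 352x288 -b:v 200k -c:a libopencore_amrnb -ar 8000 -ac 1 -b:a 12.2k -f rtsp rtsp://[crtmp_server_addr]:8554/ch
If that plays, raise the quality step by step until the player breaks again, and you will have found the device's limit.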
I'm trying to stream a microphone's audio to multiple clients.
The broadcaster is a screenless Raspberry Pi, so I can't open a web browser and click on "share microphone".
The clients will be using their smartphones to listen.
The latency must be super low.
I did not find any WebRTC demo that worked. All of them are either p2p, or the scalable broadcasting demo from Muaz Khan works only for the initiator, not the clients.
I came across Janus (though I didn't really understand what exactly it does), but I don't get how to install or configure it.
Is there any way to easily share the microphone's output via WebRTC? Something like Apache hosting a simple website where the microphone audio is served?
Thanks for all the ideas on how to solve it!
Is there any way to easily share the microphone's output via WebRTC?
No. There's nothing easy or simple about WebRTC.
The broadcaster is a screenless Raspberry Pi, so I can't open a web browser and click on "share microphone".
This is the simplest option... running a browser. Are you sure you actually need to click anything to allow it to access the audio device?
In the past, I've used a flag on Chromium to get around this problem. I don't remember exactly what that flag was, but looking at the list, it might have been...
--use-fake-ui-for-media-stream
You might also be able to use --enable-kiosk-mode.
At a minimum, if you were to open the browser interactively and enable access, that page would get automatic access in the future.
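For example, on the Pi you could launch the broadcaster page like this (a sketch: the chromium-browser binary name and the page URL are assumptions about your setup):
chromium-browser --use-fake-ui-for-media-stream http://localhost/broadcast.html
The --use-fake-ui-for-media-stream switch auto-accepts the getUserMedia permission prompt, so nothing has to click "share microphone".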
I did not find any WebRTC demo that worked. All of them are either p2p
WebRTC is peer-to-peer, but remember that the "server" can be one of those "peers".
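So one workable shape is: the Pi runs the sending peer, and each phone connects to it as the other peer. A minimal sketch of the sending side in browser JavaScript, assuming you provide some signaling channel (the signaling object and the STUN server below are assumptions, not part of any demo mentioned above):

// Sending side: one RTCPeerConnection per listener.
async function broadcastTo(signaling) {
  const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });
  // Capture the microphone and attach the audio track.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  stream.getTracks().forEach(track => pc.addTrack(track, stream));
  // Trickle ICE candidates to the listener via your signaling channel.
  pc.onicecandidate = e => { if (e.candidate) signaling.send(JSON.stringify({ candidate: e.candidate })); };
  // The offer/answer exchange also goes over the signaling channel.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify({ sdp: pc.localDescription }));
  return pc;
}

Note that plain WebRTC needs one peer connection per listener, which is exactly the fan-out problem media servers like Janus exist to solve.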
Finally, you can look into using GStreamer, but don't expect anything quick and easy. https://github.com/centricular/gstwebrtc-demos
I have a product that can analyze video once it is given an RTSP URL.
I would like to use a webcam as the source and feed my product the webcam's RTSP stream.
How can I do that?
It will depend on the webcam you are using - most support RTSP, but many do not publish the interface for accessing the stream, as they are designed to be used with the webcam's own companion app.
There are some web resources which list the RTSP URLs for common webcams. You may find it hard to find a match as new versions of webcams roll out, but they should give you a feel for how to access a vendor's camera if you have a specific webcam you are testing against. Some examples (at the time of writing):
https://www.getscw.com/decoding/rtsp
https://soleratec.com/get-support/rtsp/
If you can't find the info for the camera you are using, and you have the companion app, you can also use a network-sniffing tool like Wireshark (https://www.wireshark.org) and search the traffic for the 'rtsp://' pattern.
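As a concrete starting point, the display filter rtsp in the Wireshark UI narrows the capture to RTSP exchanges, and the same filter works on the command line (a sketch; the interface name is an assumption about your machine):
tshark -i eth0 -Y rtsp
The DESCRIBE/SETUP requests that show up will contain the full rtsp:// URL the companion app is using.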
If you just need to test your app, and you have access to a Raspberry Pi with a camera module, you can also use that to generate an RTSP stream. There are several approaches for this, but one I have found reliable is the v4l2rtspserver server:
https://github.com/mpromonet/v4l2rtspserver
There are specific instructions for setting it up on a Pi (https://github.com/mpromonet/v4l2rtspserver/wiki/Setup-on-Pi), and you can verify it is working using VLC player on a laptop etc. before testing it in your specific application.
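To give a flavour, once installed it can be started with something like the following (a sketch: the resolution/framerate flags and the default /unicast path are taken from the project's README at the time of writing, so double-check against your version):
v4l2rtspserver -W 640 -H 480 -F 15 -P 8554 /dev/video0
vlc rtsp://<pi-address>:8554/unicast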
There are also a small number of test RTSP URLs available on the web - the most reliable seems to be the one at this link, provided by Wowza (again, link valid at time of writing):
https://www.wowza.com/html/mobile.html
I am trying to make an online examination portal. When students start the exam, their webcam will start automatically, and the live stream will be recorded and stored on the server. Invigilators will either watch the students live or watch the saved streams later.
I researched this and found WebRTC as a possible solution, along with a gateway server like Kurento. But I later found out that WebRTC is not supported in Safari, which is a setback! My application should run successfully in any modern browser, including Safari, and also on Android or iPhone.
So can anyone suggest a possible solution to my problem? Which technology should I use that supports all browsers and operating systems?
Also, it would be helpful if you could provide links to good documentation or tutorials.
Note from the future (2020): This answer really isn't accurate anymore.
WebRTC is one problem... capturing from the camera with getUserMedia is another. Safari doesn't support either.
There is currently no video capture API in Safari. The only thing you can do is build a native app for iOS.
Worse yet, because of Apple's restrictive policies, alternative browsers such as Chrome are crippled on iOS, as they aren't allowed to use their own browser engines.
Use standards-based technologies like getUserMedia and WebRTC for your primary web-based application. If you decide that the economics of your situation allow it, you can build an iOS app to work alongside it until Apple decides to participate in modern browser standards like everyone else.
You can use MediaDevices.getUserMedia (https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia) to capture the webcam stream in the browser (Chrome and Firefox).
To work with the webcam stream in Safari, you would have to use a polyfill - https://github.com/Temasys/AdapterJS
To record the video/audio stream, you can make use of the MediaRecorder API: https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder (Note: recording the stream is still a challenge in Safari, as there is no support/polyfill. However, it works perfectly in the latest Chrome and Firefox versions.)
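Put together, the capture-and-record flow on Chrome/Firefox looks roughly like this (a sketch: the webm mime type and the /upload endpoint are assumptions, not part of any of the linked demos):

async function recordWebcam() {
  // Ask for camera + microphone; the browser shows a permission prompt.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];
  recorder.ondataavailable = e => chunks.push(e.data);
  recorder.onstop = () => {
    // Hypothetical endpoint - send the finished recording to the server.
    const blob = new Blob(chunks, { type: 'video/webm' });
    fetch('/upload', { method: 'POST', body: blob });
  };
  recorder.start(1000); // deliver a data chunk every second
  return recorder;      // call recorder.stop() when the exam ends
}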
Helpful demonstrations :
https://webrtc.github.io/samples/
https://mozdevs.github.io/MediaRecorder-examples/index.html
https://codepen.io/collection/XjkNbN/
https://hacks.mozilla.org/2016/04/record-almost-everything-in-the-browser-with-mediarecorder/
I am trying to use an RTSP stream from an IP camera as a video input source for various applications on Windows (e.g. Skype, Zoom, Microsoft Teams).
The only solution I have found so far is "webcam 7", an application that fetches an RTSP stream and creates a virtual webcam driver that registers in the system as a webcam which any application can then use. Unfortunately, this application often becomes unstable and can crash randomly.
Are there any alternative/better ways for achieving this?
Create your own DirectShow video capture filter (there are lots of examples - this is a great one) and handle the RTSP stream inside it. That way, stability is under your own control.
I know this is a bit of an old question, but you can also have a look at vlc2vcam, which looks promising.
Try the Moonware Universal Source Filter from http://netcamstudio.com. The only drawback is that it creates only a "composite" video device that sends both video + audio, and Skype can only see the video (I think most applications do the same).
If I find an easy way to split that stream, I will post it here.
You can do this easily on Ubuntu, Debian, Raspbian, and Ubuntu on the Windows Subsystem for Linux using the following method.
Install the required packages, v4l2loopback-dkms and ffmpeg:
sudo apt install v4l2loopback-dkms
sudo apt install ffmpeg
Emulate a video device:
sudo modprobe v4l2loopback card_label="Webcam Stream Name" exclusive_caps=1
Stream from the RTSP URI to the created virtual device:
ffmpeg -stream_loop -1 -re -i rtsp://uri -vcodec rawvideo -threads 0 -f v4l2 /dev/video0
You can replace the '0' at the end of /dev/video0 with the number of an available, playable video device.
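To check which device number the loopback actually registered as, and that the feed plays, something like this should work (a sketch assuming v4l2-ctl from the v4l-utils package is installed):
v4l2-ctl --list-devices
ffplay -f v4l2 /dev/video0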
I am working on a Chrome packaged app that requires multicast communication over the local network and specifically targets Chromebook users. The 'Network Communication' documentation on the packaged-apps site is outdated, and the chrome.socket API documentation is lacking. I was able to get some idea of how to get multicast working by looking through Chrome's 'multicast' sample app (https://github.com/GoogleChrome/chrome-app-samples/tree/master/multicast).
I tested my app by loading it into the Chrome browser on my Mac, and everything worked great. I then loaded it onto my Chromebook, and multicast did not work. I tried the 'multicast' sample app on my Mac and Chromebook with the same result. The sample app is a chat app: when it is loaded on both computers on the same network, everything works as expected on the Mac - I can send chat messages out and receive chat messages. On the Chromebook I can send chat messages but not receive them - including the ones the Chromebook itself sent.
According to this post - Chrome Sockets API Behaves Differently on Chrome OS (vs. Ubuntu, Windows)? - it looks like the Chromebook has a restrictive firewall that is blocking UDP packets. I followed the instructions in the post to turn on developer mode and allow UDP packets, and that let my app run as expected, but it is not a solution for me. I can't expect Chromebook users to run in developer mode just to use my app.
Does anyone know if it is possible to allow UDP packets on a Chromebook without going into developer mode? Is there an undocumented permission I can add to my manifest to override the Chromebook's UDP restrictions? (This seems possible, since the 'udp-multicast-membership' permission included in the 'multicast' sample app is undocumented.) It's a long shot, but chrome.socket.create can also be given optional socket options which don't appear to be documented anywhere - maybe there is something I can add there? And why have Chrome sample apps that don't run on a Chromebook?
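For anyone lining this up against their own app, this is roughly the shape of the socket permission and the group join as I understand them from the sample - double-check against the sample's manifest, and note the group address and port below are hypothetical placeholders:

"permissions": [
  { "socket": ["udp-bind::*", "udp-send-to::*", "udp-multicast-membership"] }
]

chrome.socket.create('udp', {}, function (createInfo) {
  var socketId = createInfo.socketId;
  // Bind to all interfaces on the chat port, then join the multicast group.
  chrome.socket.bind(socketId, '0.0.0.0', 12345, function () {
    chrome.socket.joinGroup(socketId, '239.1.2.3', function (result) {
      // result is 0 on success; receiving is where Chrome OS differs.
    });
  });
});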
---UPDATE---
In case someone is having a similar problem: it looks like this has already been filed as a bug and was looked at just within the past couple of weeks.
https://code.google.com/p/chromium/issues/detail?id=275737
I have no idea when it will actually make its way to a Chromebook update.
Regarding:
I have no idea when it will actually make its way to a Chromebook update.
Usually Chrome OS follows the same release schedule as desktop Chrome, with new stable versions coming out about every six weeks, but because of the holidays in December, when lots of people are on vacation, there is sometimes a hiccup. You can give the dev channel a try and see if the fix has shown up there yet: https://support.google.com/chromebook/answer/1086915?hl=en