Get a stream from a remote camera

I need to start a live stream on a remote computer connected to a webcam,
then connect to that remote IP address and watch the live stream, more or less like a security camera.
On the client, I want to be able to see the stream in my browser.
What I've tried so far:
VLC on the remote PC: I start the stream (MMS, HTTP, or RTSP) and then embed the stream as an object in an HTML page.
This works, but the latency is high and not all browsers support x-vlc-plugin.
WebRTC. This seemed to me the best solution: a direct stream with very low latency.
I tried all the solutions I found on the internet, including ones that integrate Node.js. I also tried to build some code myself, but the problem is this:
I start the stream on the "server", the remote PC.
When I go to the client, I type the IP address and port of the remote PC into the browser. In theory I should be able to see the REMOTE stream, but instead the browser asks for permission to use my LOCAL camera!
Do you have any hints or solutions? What am I doing wrong?
Last project I tried:
https://github.com/xat/webcam-binaryjs-demo
In this project:
https://webrtc.github.io/samples/src/content/peerconnection/multiple-relay/
the developer uses a relay of the stream.
The buttons work, but I don't know how to use this, that is, how to catch the relay and display it on the client.
Thank you for your suggestions.

WebRTC has three common APIs:
getUserMedia: captures the local camera/mic into the browser (this is what requests permission to access the camera/mic)
https://developer.mozilla.org/en-US/docs/Web/API/Navigator/getUserMedia
RTCDataChannel: a data channel for sending/receiving any type of data over the connection
https://developer.mozilla.org/en-US/docs/Web/API/RTCDataChannel
RTCPeerConnection: for creating the peer-to-peer connection
https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection
On the viewing client you don't need getUserMedia.
Find the getUserMedia() call: this is the method that sends the access request for the camera and microphone to the user. You can set both options to false, or carefully remove the call:

navigator.getUserMedia({
  video: false,
  audio: true,
}, function (mediaStream) {
  // ... use the stream
}, function (err) {
  // the legacy getUserMedia API requires an error callback
  console.error(err);
});
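To make this concrete, a receive-only viewer can be sketched roughly as follows. This is a hypothetical sketch, not code from the question: `signaling` stands in for whatever channel carries the SDP offer/answer and ICE candidates between the two machines (for example a WebSocket to the remote PC), and the remote side is assumed to add its camera track to its own RTCPeerConnection. Note there is no getUserMedia call anywhere on the viewer, so the browser never asks for the local camera:

```javascript
// Viewer side: receive the remote stream without touching the local camera.
// `signaling` is a placeholder object with send() and an onmessage handler,
// e.g. a WebSocket connected to the remote PC's signaling endpoint.
function createViewer(signaling, videoElement) {
  const pc = new RTCPeerConnection();

  // Display whatever track the remote peer sends us.
  pc.ontrack = (event) => {
    videoElement.srcObject = event.streams[0];
  };

  // Forward our ICE candidates to the remote peer.
  pc.onicecandidate = (event) => {
    if (event.candidate) {
      signaling.send(JSON.stringify({ candidate: event.candidate }));
    }
  };

  // Answer the offer that the remote (camera) side initiates.
  signaling.onmessage = async (msg) => {
    const data = JSON.parse(msg.data);
    if (data.offer) {
      await pc.setRemoteDescription(data.offer);
      const answer = await pc.createAnswer();
      await pc.setLocalDescription(answer);
      signaling.send(JSON.stringify({ answer: pc.localDescription }));
    } else if (data.candidate) {
      await pc.addIceCandidate(data.candidate);
    }
  };

  return pc;
}
```

The key point is that only the remote PC calls getUserMedia and adds tracks; the viewer merely answers the offer and renders the incoming stream.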

Related

Reverse Engineering HTTP API between iOS device and wifi device on same network

I have a device that connects wirelessly to my network. This device has a dedicated iOS app, but it hasn't been updated in a while and I worry it is no longer supported/updated and therefore will eventually stop working.
What I would like to do is try to capture the data between the app and the device and reverse engineer the API so I can write my own app.
I can see that the device has port 8080 open, so I am hoping that it is an HTTP-based API on a non-standard port.
I've looked at both Fiddler and Postman but can't quite work out whether their proxies will capture the data from port 8080 or just 80/443.
Any thoughts on how to do this?
Thanks

Stream live video from Raspberry Pi Camera to Android App

I have multiple Raspberry Pi devices with the native camera in my home and office (PUBLISHERS). The publishers (Pis) are on a local network behind a firewall/router and connected to the internet.
I have an EC2 webserver (BROKER). It is publicly accessible over a public IP Address.
I have an Android App on my phone. It has internet connectivity through a 4G Network. (SUBSCRIBER/CONSUMER/CLIENT)
I am trying to view the live feed of each of the Raspberry Pi cameras in my Android app. The problem is more conceptual than technical: I am unable to decide on the right approach and the most efficient way to achieve this in terms of cost and latency.
Approaches I have figured out based on my research:
Approach 1:
1. Stream the camera over RTSP / RTMP on the Pi device via raspivid/ffmpeg
2. Have code on the Pi device that reads the RTSP stream and saves it to AWS S3
3. Have a middleware that transcodes the RTSP stream and saves it in a format accessible to the mobile app via an S3 URL
Approach 2:
1. Stream the camera over RTSP / RTMP on the Pi device via raspivid/ffmpeg
2. Have code on the Pi device that reads the RTSP stream and pushes it to a remote frame-gathering (ImageZMQ) server. EC2 can be used here.
3. Have a middleware that transcodes the frames into an RTSP stream and saves it on S3 in a format accessible to the mobile app via a publicly accessible S3 URL
Approach 3:
1. Stream the camera in WebRTC format by launching a web browser.
2. Send the stream to a media server like Kurento. EC2 can be used here.
3. Generate a unique, publicly accessible WebRTC URL for each stream
4. Access the webrtc video via mobile app
Approach 4:
1. Stream the camera over RTSP / RTMP on the Pi device via raspivid/ffmpeg
2. Grab the stream via Amazon Kinesis client installed on the devices.
3. Publish the Kinesis stream to AWS Cloud
4. Have a Lambda store to transcode it and store it in S3
5. Have the mobile app access the video stream via publicly accessible S3 url
Approach 5: - (Fairly complex involving STUN/TURN Servers to bypass NAT)
1. Stream the camera over RTSP / RTMP on the Pi device via raspivid/ffmpeg
2. Grab the stream and send it to a media server like GStreamer. EC2 can be used here.
3. Use a live555 proxy or the nginx RTMP module. EC2 can be used here.
4. Generate a unique, publicly accessible link for each device, all running on the same port
5. Have the mobile app access the video stream via the device link
I am open to any video format as long as I am not using a third-party commercial solution like Wowza, Ant Media, Dataplicity, or AWS Kinesis. The most important constraint I have is that all my devices are headless and I can only access them via SSH. As such, I excluded any option that involves manual setup or interacting with the desktop interface of the PUBLISHERS (Pis). I can create scripts to automate all of this.
The end goal is to have public URLs for each of the Raspberry Pi cams, all running on the same socket/port number, like this:
rtsp://cam1-frontdesk.mycompany.com:554/
rtsp://cam2-backoffice.mycompany.com:554/
rtsp://cam3-home.mycompany.com:554/
rtsp://cam4-club.mycompany.com:554/
Basically, with raspivid/ffmpeg you have a simple IP camera, so any architecture applicable to that case would work for you. As an example, take a look at this architecture where you install Nimble Streamer on your AWS machine, process the stream there, and get a URL for playback (HLS or any other suitable protocol). That URL can be played in any hardware/software player of your choice and can be inserted into any web player as well.
So it's your Approach 3, with HLS instead of WebRTC.
Which solution is appropriate depends mostly on whether you're viewing the video in a native application (e.g. VLC) and what you mean by "live" -- typically, "live streaming" uses HLS, which typically adds at least 5 and often closer to 30 seconds of latency as it downloads and plays sequences of short video files.
If you can tolerate the latency, HLS is the simplest solution.
If you want something real-time (< 0.300 seconds of latency) and are viewing the video via a native app, RTSP is the simplest solution.
If you would like something real-time and would like to view it in the web browser, Broadway.js, Media Source Extensions (MSE), and WebRTC are the three available solutions. Broadway.js is limited to H.264 Baseline, and only performs decently with GPU-accelerated canvas support -- not supported on all browsers. MSE is likewise not supported on all browsers. WebRTC has the best support, but is also the most complex of the three.
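To show what the MSE option looks like in practice, here is a minimal hypothetical sketch of browser-side playback. It assumes your server pushes fragmented MP4 segments over a WebSocket; the WebSocket URL and the codec string are placeholders you would adapt to your encoder's output:

```javascript
// Minimal MSE sketch: feed fragmented-MP4 segments arriving over a
// WebSocket into a <video> element. Assumes H.264 Baseline in fMP4.
function playLiveStream(videoElement, wsUrl) {
  const mediaSource = new MediaSource();
  videoElement.src = URL.createObjectURL(mediaSource);

  mediaSource.addEventListener('sourceopen', () => {
    const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
    const queue = [];

    // SourceBuffer accepts one append at a time, so queue segments
    // that arrive while a previous append is still in flight.
    sb.addEventListener('updateend', () => {
      if (queue.length && !sb.updating) sb.appendBuffer(queue.shift());
    });

    const ws = new WebSocket(wsUrl);
    ws.binaryType = 'arraybuffer';
    ws.onmessage = (event) => {
      if (sb.updating || queue.length) queue.push(event.data);
      else sb.appendBuffer(event.data);
    };
  });

  return mediaSource;
}
```

The design point here is the append queue: appending while `sb.updating` is true throws, so segments are buffered until the previous append finishes.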
For real-time video from a Raspberry Pi that works in any browser, take a look at Alohacam.io (full disclosure: I am the author).

Live streaming audio with WebRTC browser => server

I'm trying to send an audio stream from my browser to a server (UDP; I also tried WebSockets).
I'm recording the audio stream with WebRTC, but I have problems transmitting the data from a Node.js client to my server.
Any ideas? Is it possible to send an audio stream to the server using WebRTC (OpenWebRTC)?
To get audio from the browser to the server, you have a few different possibilities.
Web Sockets
Simply send the audio data over a binary web socket to your server. You can use the Web Audio API with a ScriptProcessorNode to capture raw PCM and send it losslessly. Or, you can use the MediaRecorder to record the MediaStream and encode it with a codec like Opus, which you can then stream over the Web Socket.
There is a sample for doing this with video over on Facebook's GitHub repo. Streaming audio only is conceptually the same thing, so you should be able to adapt the example.
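As a rough sketch of the raw-PCM variant (hypothetical: the WebSocket URL is a placeholder, and ScriptProcessorNode is deprecated in favor of AudioWorklet but still widely supported), the browser side might look like this:

```javascript
// Convert Web Audio's Float32 samples ([-1, 1]) to 16-bit signed PCM,
// which is what most server-side audio tooling expects.
function floatTo16BitPCM(float32Array) {
  const out = new Int16Array(float32Array.length);
  for (let i = 0; i < float32Array.length; i++) {
    const s = Math.max(-1, Math.min(1, float32Array[i])); // clamp
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return out;
}

// Capture the microphone and stream raw PCM chunks over a binary WebSocket.
function streamMicToServer(url) {
  const ws = new WebSocket(url);
  ws.binaryType = 'arraybuffer';

  navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
    const ctx = new AudioContext();
    const source = ctx.createMediaStreamSource(stream);
    // 4096-sample buffers, mono in, mono out.
    const processor = ctx.createScriptProcessor(4096, 1, 1);

    processor.onaudioprocess = (event) => {
      const pcm = floatTo16BitPCM(event.inputBuffer.getChannelData(0));
      if (ws.readyState === WebSocket.OPEN) ws.send(pcm.buffer);
    };

    source.connect(processor);
    processor.connect(ctx.destination); // keeps the node pulling audio
  });
}
```

On the server, each WebSocket message is then a buffer of little-endian 16-bit mono samples at the AudioContext's sample rate, which you would also need to communicate or resample.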
HTTP (future)
In the near future, you'll be able to use a WritableStream as the request body with the Fetch API, allowing you to make a normal HTTP PUT with a stream source from a browser. This is essentially the same as what you would do with a Web Socket, just without the Web Socket layer.
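A sketch of what that streaming upload looks like, assuming a browser (or runtime) that supports streaming request bodies; the endpoint URL and the `getNextChunk` callback are placeholders:

```javascript
// Stream chunks (e.g. PCM from the mic) as the body of a single HTTP PUT.
// `getNextChunk` is a placeholder async function returning a Uint8Array,
// or null when the stream ends.
function uploadAudioStream(url, getNextChunk) {
  const body = new ReadableStream({
    async pull(controller) {
      const chunk = await getNextChunk();
      if (chunk) controller.enqueue(chunk);
      else controller.close(); // end of stream ends the request body
    },
  });

  // `duplex: 'half'` is required for streaming request bodies in
  // implementations that support them (e.g. Chromium, Node's fetch).
  return fetch(url, { method: 'PUT', body, duplex: 'half' });
}
```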
WebRTC (data channel)
With a WebRTC connection and the server as a "peer", you can open a data channel and send that exact same PCM or encoded audio that you would have sent over Web Sockets or HTTP.
There's a ton of complexity added to this with no real benefit. Don't use this method.
WebRTC (media streams)
WebRTC calls support direct handling of MediaStreams. You can attach a stream and let the WebRTC stack take care of negotiating a codec, adapting for bandwidth changes, dropping data that doesn't arrive, maintaining synchronization, and negotiating connectivity around restrictive firewall environments. While this makes things easier on the surface, that's a lot of complexity as well. There aren't any packages for Node.js that expose the MediaStreams to you, so you're stuck dealing with other software... none of it as easy to integrate as it could be.
Most folks going this route will execute gstreamer as an RTP server to handle the media component. I'm not convinced this is the best way, but it's the best way I know of at the moment.

How to create a stun, turn and signaling server

Is there a simple guide from which I can start creating a STUN/TURN and signaling server?
I spent over a week searching for these things and couldn't find any guide where I could say:
okay, I am on the right track now; this is clear.
So far, everything is so abstract, without any examples.
This is what I'm trying to achieve: a simple video stream on my local network, where I'll have a server with a USB camera installed on it, and an application on my IIS which will connect to the USB camera and stream it to clients, so that every time a client opens the application, they will see the video stream from the server's camera.
Note: since I want to use it on my local network, do I really need a STUN/TURN server, or is there a guide that shows how to avoid them?
Media streamed over dedicated HTTP/HTTPS servers rarely needs a NAT traversal solution. Instead, just put your web server, with the camera attached, on the public Internet or behind your NAT with port forwarding enabled.
There are LOTS of streaming media solutions available as open source, free downloads, or commercially sold. A good list is here:
https://en.wikipedia.org/wiki/List_of_streaming_media_systems

kurento media server not recording remote audio

I have extended the one-to-one video call tutorial to add recording.
Original http://doc-kurento.readthedocs.io/en/stable/tutorials.html#webrtc-one-to-one-video-call
Extended https://github.com/gaikwad411/kurento-tutorial-node
Everything is fine except recording the remote audio.
When the caller and callee videos are recorded, the callee's voice is absent from the caller's recording and vice versa.
I have searched the Kurento docs and mailing lists but did not find a solution.
The workarounds I have in mind:
1. Use ffmpeg to combine the two videos
2. Use composite recording; I will also need to combine the remote audio stream.
My questions are:
1) Why is this happening? I can hear the remote audio during the call, but not in the recording; in the recording I can hear only my own voice.
2) Is there another solution apart from composite recording?
This is perfectly normal behaviour. When you connect a WebRtcEndpoint to a RecorderEndpoint, you only get the media that the endpoint is pushing into the pipeline. As the endpoint is one peer of a WebRTC connection between the browser and the media server, the media that the endpoint pushes into the pipeline is whatever it receives from the browser that has negotiated that WebRTC connection.
The only options that you have, as you have stated already, are post-processing or composite mixing.
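For reference, the composite-mixing option can be sketched with the kurento-client Node.js API roughly as follows. This is an untested sketch, not code from the tutorial: the endpoint variables and `recordingUri` are placeholders, and error handling is omitted. The idea is that both participants feed a Composite hub, and the recorder records the hub's mixed output instead of a single WebRtcEndpoint:

```javascript
// Mix caller and callee media into one recording via Kurento's Composite hub.
// `pipeline`, `callerWebRtc`, and `calleeWebRtc` are assumed to be the
// MediaPipeline and WebRtcEndpoints already created for the call.
async function recordComposite(pipeline, callerWebRtc, calleeWebRtc, recordingUri) {
  const composite = await pipeline.create('Composite');

  // One hub port per media source, plus one for the mixed output.
  const callerPort = await composite.createHubPort();
  const calleePort = await composite.createHubPort();
  const outputPort = await composite.createHubPort();

  const recorder = await pipeline.create('RecorderEndpoint', { uri: recordingUri });

  await callerWebRtc.connect(callerPort); // caller's audio/video into the mixer
  await calleeWebRtc.connect(calleePort); // callee's audio/video into the mixer
  await outputPort.connect(recorder);     // mixed stream to the recorder

  await recorder.record();
  return recorder;
}
```

Because the recorder now sits downstream of the hub rather than a single endpoint, the resulting file contains both parties' audio, at the cost of the extra transcoding/mixing load on the media server.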