How to reverse engineer video stream from wifi camera

How can I reverse engineer the video stream URL by looking at the network interface/traffic?
I have a Panasonic Lumix DMC-FT6, which has a "wifi mode", and the iOS Panasonic Image App. I can set up an HTTP proxy between the two (using Charles) and see the communication. By watching the interface in Wireshark (via the method in How do you monitor network traffic on the iPhone?), I can also see some "UDP 17" traffic between the iPhone and the camera.
My end goal is to enter a stream URL in VLC and see the video stream.
Unfortunately there is no open source information on this that I can see.
What is the protocol most likely to be (for a consumer camera)?
What traits can I look for?
Are there any obvious URLs worth trying?
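For concreteness, this is roughly the capture setup (assuming the rvictl approach from the linked question; the UDID, camera IP, port and path below are placeholders, not confirmed values), plus the kind of blind guesses I have tried in VLC:
# mirror the iPhone's traffic into a capture interface
rvictl -s <iphone-udid>
# record everything between the phone and the camera for Wireshark
sudo tcpdump -i rvi0 -w camera.pcap host <camera-ip>
# blind guesses at a stream URL, none verified:
vlc rtsp://<camera-ip>/<guessed-path>
vlc udp://@:<port-seen-in-capture>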

Related

Stream live video from Raspberry Pi Camera to Android App

I have multiple Raspberry Pi devices with the native camera module in my home and office (PUBLISHERS). The publishers (Pis) are on a local network behind a firewall/router and connected to the internet.
I have an EC2 webserver (BROKER). It is publicly accessible over a public IP Address.
I have an Android App on my phone. It has internet connectivity through a 4G Network. (SUBSCRIBER/CONSUMER/CLIENT)
I am trying to view the live feed of each of the Raspberry Pi cameras in my Android app. The problem is more conceptual than technical: I am unable to decide on the right approach and the most efficient way to achieve this in terms of cost and latency.
Approaches I have figured out based on my research:
Approach 1:
1. Stream the camera as RTSP / RTMP on the Pi device via raspivid/ffmpeg
2. Have code on the Pi device that reads the RTSP stream and saves it to AWS S3
3. Have middleware that transcodes the RTSP stream and saves it in a format accessible to the mobile app via an S3 URL
Approach 2:
1. Stream the camera as RTSP / RTMP on the Pi device via raspivid/ffmpeg
2. Have code on the Pi device that reads the RTSP stream and pushes it to a remote frame-gathering (ImageZMQ) server. EC2 can be used here.
3. Have middleware that transcodes the frames back to an RTSP stream and saves it in a format on S3 that is accessible to the mobile app via a publicly accessible S3 URL
Approach 3:
1. Stream the camera in WebRTC format by launching a web browser.
2. Send the stream to a media server like Kurento. EC2 can be used here.
3. Generate a unique, publicly accessible WebRTC URL for each stream
4. Access the WebRTC video via the mobile app
Approach 4:
1. Stream the camera as RTSP / RTMP on the Pi device via raspivid/ffmpeg
2. Grab the stream via Amazon Kinesis client installed on the devices.
3. Publish the Kinesis stream to AWS Cloud
4. Have a Lambda function transcode it and store it in S3
5. Have the mobile app access the video stream via publicly accessible S3 url
Approach 5 (fairly complex, involving STUN/TURN servers to bypass NAT):
1. Stream the camera as RTSP / RTMP on the Pi device via raspivid/ffmpeg
2. Grab the stream and send it to a media server like GStreamer. EC2 can be used here.
3. Use a live555 proxy or the nginx RTMP module (a rough proxy sketch follows this list). EC2 can be used here.
4. Generate a unique publicly accessible link for each device, all running on the same port
5. Have the mobile app access the video stream via the device link
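As a rough, unverified sketch of the live555 variant of this approach (back-end addresses are placeholders, and reaching the Pis from EC2 still needs the tunnel/VPN or TURN-style plumbing implied above), the proxy on the EC2 host would just be pointed at each back-end stream:
# on the EC2 host: put all per-Pi RTSP streams behind one public RTSP port
live555ProxyServer rtsp://<pi1-reachable-address>:8554/cam rtsp://<pi2-reachable-address>:8554/cam
# on startup the proxy prints one public rtsp:// playback URL per back-end stream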
I am open to any video format as long as I am not using a third-party commercial solution like Wowza, Ant Media, Dataplicity, or AWS Kinesis. The most important constraint is that all my devices are headless and I can only access them via SSH. As such, I have excluded any option that involves manual setup or interacting with the desktop interface of the PUBLISHERS (Pis). I can create scripts to automate all of this.
My end goal is to have a public URL for each of the Raspberry Pi cams, all running on the same socket/port number, like this:
rtsp://cam1-frontdesk.mycompany.com:554/
rtsp://cam2-backoffice.mycompany.com:554/
rtsp://cam3-home.mycompany.com:554/
rtsp://cam4-club.mycompany.com:554/
Basically, with raspivid/ffmpeg you have a simple IP camera, so any architecture applicable in that case will work for you. As an example, take a look at this architecture where you install Nimble Streamer on your AWS machine, process the stream there, and get a URL for playback (HLS or any other suitable protocol). That URL can be played in any hardware/software player of your choice and can be embedded in any web player as well.
So it's your Approach 3, with HLS instead of WebRTC.
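For concreteness, a minimal sketch of that "simple IP camera" step on the Pi (the ingest hostname and stream key are placeholders; resolution and bitrate are just examples):
raspivid -t 0 -w 1280 -h 720 -fps 25 -b 2000000 -o - | \
  ffmpeg -f h264 -framerate 25 -i - -c:v copy -f flv rtmp://<your-ec2-host>/live/cam1
The Pi only copies the camera's already-encoded H.264, so there is no transcoding on the device; the server side (Nimble Streamer, nginx-rtmp, or similar) repackages it for playback.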
Which solution is appropriate depends mostly on whether you're viewing the video in a native application (e.g. VLC) and on what you mean by "live" -- typically, "live streaming" uses HLS, which adds at least 5 and often closer to 30 seconds of latency as it downloads and plays sequences of short video files.
If you can tolerate the latency, HLS is the simplest solution.
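As a rough sketch of what the HLS route can look like (the input URL and web root are placeholders), the packaging step is just ffmpeg cutting the feed into short segments served by an ordinary web server:
ffmpeg -i rtsp://<camera-address>/<stream> -c copy \
  -f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments \
  /var/www/html/cam1.m3u8
Players then fetch http://<server>/cam1.m3u8; with 4-second segments and a player buffering a few of them, the multi-second latency mentioned above is exactly what you'd expect.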
If you want something real-time (< 0.300 seconds of latency) and are viewing the video via a native app, RTSP is the simplest solution.
If you would like something real-time and would like to view it in the web browser, Broadway.js, Media Source Extensions (MSE), and WebRTC are the three available solutions. Broadway.js is limited to H.264 Baseline, and only performs decently with GPU-accelerated canvas support -- not supported on all browsers. MSE is likewise not supported on all browsers. WebRTC has the best support, but is also the most complex of the three.
For real-time video from a Raspberry Pi that works in any browser, take a look at Alohacam.io (full disclosure: I am the author).

Streaming webcam and mic inputs through browser

Short version:
I need an in-browser solution to deliver the webcam and mic streams to a server.
Long version:
I'm trying to create a live streaming application. So far I've only managed to figure out this workflow:
Client creates stream (some transcoder is probably required here)
Client sends (publishes?) the stream to the server (basically hosts an RTMP/other stream that should be accessible by my server)
Server transcodes, transrates, etc. and publishes the stream to a CDN
Viewers watch published stream
Ideally, I'd like a browser-based solution that requires minimal setup from the client's end (a Flash plugin download might be acceptable) and streams the webcam and mic inputs to the server. I'm either unaware of the precise keywords or am looking for the wrong thing, but I can't find an apt solution.
Solutions that involve using ffmpeg or vlc to publish a stream aren't really what I'm looking for, since they require additional download and setup, and aren't restricted to just webcam and mic inputs. WebRTC probably won't serve the same quality but if all else fails, I think it can get the job done, at least for some browsers.
I'm using Ubuntu for development and have just activated a trial license for Wowza streaming server and cloud.
Is ffmpeg/VLC et al. the only way out? Or is there something that can do the job in a single browser tab?
If you go the RTMP way, Adobe Flash Player supports H.264 encoding directly. Since you mentioned Wowza you can find an example and complete source code (including the fla) in the examples directory. There's also a demo here. There are many other open-source Flash capture plugins.
You can also use the aforementioned Flash recorder without Wowza. In this case you'll need a RTMP server, a notable example being the Nginx RTMP module which supports recording (to flv) and also offers callbacks that allow you to launch the transcoding once the recording is done.
With WebRTC you can record small media chunks (getUserMedia, MediaStreamRecorder) and send them to the server, where they get concatenated, or you can use the peer-to-peer communication features of WebRTC (RTCPeerConnection). For a detailed overview see my answer here.
In both cases you'll have issues with devices/browsers that don't support Flash or WebRTC, e.g. iPhones and Safari. Plus, getUserMedia doesn't capture the same format across all browsers: Firefox records audio/video in WebM, while Chrome records audio in WAV and video in WebM.
For mobile devices you'll probably have to write apps.

Get a stream of a remote camera

I need to start a live stream on a remote computer connected to a webcam,
then connect to that remote IP address and see the live stream, more or less like a security webcam.
On my client I want to be able to see the stream in my browser.
What I've tried so far:
VLC on the remote PC: I start the stream (MMS, HTTP or RTSP) and then I embed the stream as an object in an HTML page (a sketch of the command I use follows this list).
This works, but I have high latency and not all browsers support x-vlc-plugin.
WebRTC. This seemed to me the best solution. Direct stream, very low latency.
I tried all the solutions I found on the internet, including ones that integrate Node.js. I also tried to build some code myself, but the problem is this:
I start the stream on the "server", the remote PC.
When I go to the client, I type the IP address and port of the remote PC into the browser. In theory I should be able to see the REMOTE stream, but instead the browser asks for permission to use my LOCAL camera!
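For reference, the command I run on the remote PC for the VLC attempt is along these lines (the device path and encoder settings are placeholders from memory), with the HTML page embedding http://<remote-pc-ip>:8080/stream as the object source:
cvlc v4l2:///dev/video0 --sout '#transcode{vcodec=h264,vb=800,acodec=none}:std{access=http,mux=ts,dst=:8080/stream}'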
Do you have any hints or solutions? What am I doing wrong?
Last project I tried:
https://github.com/xat/webcam-binaryjs-demo
In this project:
https://webrtc.github.io/samples/src/content/peerconnection/multiple-relay/
the developer uses a relay of the stream.
The buttons work, but I don't know how to use this, i.e. how to catch the relay and display it on the client.
Thank you for your suggestions.
WebRTC has three commonly used APIs:
getUserMedia: for capturing and streaming the camera/mic in the browser (requests permission to access the camera/mic)
https://developer.mozilla.org/en-US/docs/Web/API/Navigator/getUserMedia
RTCDataChannel: a data channel for sending/receiving any type of data over the connection
https://developer.mozilla.org/en-US/docs/Web/API/RTCDataChannel
RTCPeerConnection: for creating a peer-to-peer connection
https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection
You don't need getUserMedia on the viewing client.
Find the getUserMedia() call: this method asks the user for permission to access the camera and microphone. You can set these flags to false, or carefully remove the call altogether:
navigator.getUserMedia({
    video: false,  // no camera requested, so no camera permission prompt on the viewer
    audio: true
}, function (mediaStream) {
    // a pure viewing client doesn't need this local stream at all
}, function (err) { console.error(err); });

HLS(HttpLiveStreaming) vs RTP(Real-time Transport Protocol) on UDP for mobile P2P?

I'm testing an audio/video P2P connection between mobile devices.
Studying WebRTC, I've noticed that NAT traversal (using a STUN server) and UDP hole punching are the key to making P2P possible.
On the other hand, I've noticed HLS (HTTP Live Streaming) on iOS devices is very optimized for A/V live streaming, and widely available even on Android 4.x (3.x is unstable).
So here are my questions if I use HLS for mobile P2P:
a) HLS is a protocol on top of TCP (HTTP), not UDP, so isn't there a performance drawback?
See: TCP vs UDP on video stream
b) How about NAT traversal? Will it be easier since HLS is HTTP (port 80)?
I have read wikipedia http://en.wikipedia.org/wiki/HTTP_Live_Streaming
Since its requests use only standard HTTP transactions, HTTP Live
Streaming is capable of traversing any firewall or proxy server that
lets through standard HTTP traffic, unlike UDP-based protocols such as
RTP. This also allows content to be delivered over widely available
CDNs.
c) How about Android device compatibility? Are there many problems with invoking live-stream distribution?
Thanks.
The reason why firewalls are not an issue for HLS is that it's a client-server protocol where all requests are done via HTTP on port 80. If you are implementing a P2P application, you won't be able to attach it to a port below 1024 unless you have root privileges.
This means that exchanging data via HLS (port 80) won't work for P2P. Unless you have a translation server in the middle, which defeats the purpose of P2P.
Comparing HTTP Live Streaming to P2P video streaming over UDP/RTP is almost like comparing apples and oranges. More like oranges and tangerines... read on.
HTTP Live Streaming was designed as a client-server protocol without P2P or NAT traversal in mind. The idea is that the streaming server is already on HTTP/TCP and accessible from the public internet, as if it were just an ordinary web server. The key feature of HLS is its ability to dynamically switch the bitrate based on how well the client receives the stream. If the client connection to the server hiccups trying to stream down a 1080p video, it can transparently switch to sending a lower-bitrate video (and likely switch back to streaming at a higher bitrate if network conditions improve). Good example: Netflix.
WebRTC and ICE were designed to stream real-time video bidirectionally between devices that might both be behind NATs. As such, traversing a NAT through UDP is much easier than through TCP, and UDP lends itself better to real time (less latency) than TCP. Most video-chat clients (à la Skype) have dynamic bandwidth adjustment built into their codecs and protocols to achieve something similar to what HLS does.
I suppose you could combine TCP NAT traversal and HLS together. Doing HLS over UDP implies that you build a TCP-like reliability layer on top of your UDP stream.
Hope this helps
http://www.garymcgath.com/streamingprotocols.html
HTTP Live Streaming
The new trend in streaming is the use of HTTP with protocols that
support adaptive bitrates. This is theoretically a bad fit, as HTTP
with TCP/IP is designed for reliable delivery rather than keeping up a
steady flow, but with the prevalence of high-speed connections these
days it doesn't matter so much. Apple's entry is HTTP Live Streaming,
aka HLS or Cupertino streaming. It was developed by Apple for iOS and
isn't widely supported outside of Apple's products. Long Tail Video
provides a testing page to determine whether a browser supports HLS.
Its specification is available as an Internet Draft. The draft
contains proprietary material, and publishing derivative works is
prohibited.
The only playlist format allowed is M3U Extended (.m3u or .m3u8), but the format of the streams is restricted only by the implementation.
I was able to achieve P2P on top of HLS using WebRTC on an Android device with the Mozilla Firefox browser and two other desktop browsers (Chrome and Firefox) in the same swarm.
Here's a screenshot of a presentation I gave at the university: https://www.dropbox.com/s/zyfgs4o8al9ovd0/Screenshot%202014-07-17%2019.58.15.png
This screenshot was made by accessing http://bem.tv/demo.html.
If you want to know more, this is my master's project and I'm publishing my progress at http://bem.tv and http://github.com/bemtv.

Real time live streaming with an iPhone for robotics

For research purposes, I developed an app to control a wheeled mobile robot using the gyro and the accelerometer of an iPhone. The robot has an IP address, and I control it by sending messages through a socket. Since the robot has to be controlled from anywhere in the world, I mounted a camera on top of it. I tried to stream the video from the camera using the HTTP Live Streaming protocol and VLC, but the latency is too high (15-30 seconds) to control it properly.
Now, VLC can also stream over UDP or HTTP, but the question is: how do I decode such a stream on the iPhone? How should I treat the data coming into the socket in order to display it as continuous live video?
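For what it's worth, the sending side over UDP/RTP would look something like this (the camera input, iPhone address, port and encoder settings are all placeholders); the open question is still how to receive and decode it on the iPhone:
cvlc <camera-input> --sout '#transcode{vcodec=h264,vb=800,acodec=none}:rtp{dst=<iphone-ip>,port=5004,mux=ts}'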