RTSP command to request current FPS from camera

In my RTSP server, I need to know the current FPS of the stream from an Axis camera every second.
Is there any specific RTSP command through which I can ask the camera to send FPS information to the RTSP server?
Thanks,
Prateek

The only official way in RTSP to inform a receiver about the frame rate is inside the SDP of the DESCRIBE response.
Either directly via a=framerate:<frame rate>, which by definition gives only the maximum frame rate, or inside the codec configuration of your stream, which is also sent via SDP in a=rtpmap:<payload type> <encoding name>/<clock rate>[/<encoding parameters>] or carried periodically inside the stream itself.
A better way is to compute the frame rate on the receiver side from the timestamp of every incoming frame.
Most newer Axis devices (those using H.264) use the absolute timestamp of the camera (check the camera setup!). The firmware of older devices is buggy, so you cannot rely on the absolute timestamp sent by the camera; only the time difference between two frames is accurate.
jens.
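
For illustration, here is a minimal TypeScript sketch of the receiver-side approach (the class and method names are made up for this example): it estimates FPS from the RTP timestamps of completed frames, assuming the standard 90 kHz video clock.

```typescript
// Minimal sketch: estimate FPS from the RTP timestamps of completed video frames.
// Assumes the standard 90 kHz RTP clock used by video payloads such as H.264.
const RTP_CLOCK_RATE = 90_000;

class FpsEstimator {
  private lastRtpTimestamp: number | null = null;
  private intervals: number[] = []; // frame-to-frame gaps in seconds
  private readonly windowSize = 30; // average over the last 30 frames

  /** Call once per complete frame with that frame's RTP timestamp. */
  onFrame(rtpTimestamp: number): void {
    if (this.lastRtpTimestamp !== null) {
      // ">>> 0" handles 32-bit RTP timestamp wrap-around.
      const deltaTicks = (rtpTimestamp - this.lastRtpTimestamp) >>> 0;
      this.intervals.push(deltaTicks / RTP_CLOCK_RATE);
      if (this.intervals.length > this.windowSize) this.intervals.shift();
    }
    this.lastRtpTimestamp = rtpTimestamp;
  }

  /** Current FPS estimate, or null until at least two frames have been seen. */
  fps(): number | null {
    if (this.intervals.length === 0) return null;
    const avg = this.intervals.reduce((a, b) => a + b, 0) / this.intervals.length;
    return avg > 0 ? 1 / avg : null;
  }
}
```

Feeding it the RTP timestamp each time a full frame has been assembled (for video, when the packet with the marker bit set arrives) gives a running estimate that can be polled once per second.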

Related

WebRTC - receiving H264 key frames

I've been playing with WebRTC using libdatachannel, experimenting and learning.
I wrote some code to parse RTP packets into NALUs, and I am testing against a "known good" server which sends H264 video.
Problem:
I'm only seeing NALUs with type = 1 (fragmented into multiple FU-As) and sometimes type = 24 (STAP-A, which contains embedded SPS and PPS NALUs).
So I don't understand how to decode / render this stream - I would expect the server to send a NALU with a key frame (NALU type 5) automatically to a newly connected client, but it does not.
What am I missing to be able to decode the stream? What should I do to receive a key frame quickly? If my understanding is correct, I need a key frame to start decoding / rendering.
I tried requesting a key frame from code - it does arrive (type 5), but only after a delay, which is undesirable.
And yet the stream plays perfectly fine with a web browser client (Chrome, JavaScript) and starts up quickly.
Am I maybe overthinking this, and the browser also has a delay but I'm just perceiving it as instant?
In any case, what's the situation with key frames? Is a client supposed to request them (and without that, a server should not be expected to send them)?
If so what's a good interval? One second, two, three?
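
For reference, here is a rough sketch of how the NALU type can be read from an H.264 RTP payload per RFC 6184 (the function and types are illustrative, not part of libdatachannel's API): type 28 is an FU-A fragment whose real NALU type and start flag live in the FU header, type 24 is a STAP-A aggregate, and type 5 is an IDR (key) frame.

```typescript
// Minimal sketch of H.264 RTP payload inspection per RFC 6184.
// `payload` is the RTP payload with the RTP header already stripped.
interface NaluInfo {
  type: number;               // 1 = non-IDR slice, 5 = IDR (key frame), 7 = SPS, 8 = PPS, ...
  isKeyFrame: boolean;
  isFragmentStart?: boolean;  // only meaningful for FU-A fragments
}

function inspectH264Payload(payload: Uint8Array): NaluInfo[] {
  const nalHeaderType = payload[0] & 0x1f;

  if (nalHeaderType === 28) {
    // FU-A: the real NALU type is in the low 5 bits of the FU header (2nd byte);
    // the high bit of the FU header is the start-of-fragment flag.
    const fuHeader = payload[1];
    const type = fuHeader & 0x1f;
    return [{ type, isKeyFrame: type === 5, isFragmentStart: (fuHeader & 0x80) !== 0 }];
  }

  if (nalHeaderType === 24) {
    // STAP-A: one or more NALUs, each prefixed by a 2-byte big-endian size.
    const nalus: NaluInfo[] = [];
    let offset = 1;
    while (offset + 3 <= payload.length) {
      const size = (payload[offset] << 8) | payload[offset + 1];
      const type = payload[offset + 2] & 0x1f;
      nalus.push({ type, isKeyFrame: type === 5 });
      offset += 2 + size;
    }
    return nalus;
  }

  // Single NAL unit packet: the type is in the first byte itself.
  return [{ type: nalHeaderType, isKeyFrame: nalHeaderType === 5 }];
}
```

As for who asks for key frames: a WebRTC receiver normally requests one with RTCP feedback (PLI or FIR) when it joins or loses decoder state, which is what browsers do under the hood, so requesting one on connect is the expected pattern rather than a workaround.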

How does audio and video in a WebRTC peer connection stay in sync?

How does audio and video in a WebRTC peer connection stay in sync? I am using an API which publishes audio and video (I assume as one peer connection) to a media server. The audio can occasionally go out of sync by up to 200 ms. I am attributing this to the possibility that the audio and video are separate streams, which would account for why the sync can drift.
In addition to Sean's answer:
The WebRTC player in browsers has a very low tolerance for the timestamp difference between arriving audio and video samples. Your audio and video streams must be aligned (interleaved) precisely, i.e. the timestamp of the last audio sample received from the network should be within roughly ±200 ms of the timestamp of the last video frame received from the network. Otherwise the WebRTC player will stop using NTP timestamps and will play the streams individually. This is because the WebRTC player tries to keep latency at a minimum; I am not sure that is a good decision by the WebRTC team. If your bandwidth is not sufficient, or if the live encoder provides streams that are not timestamp-aligned, then you will get out-of-sync playback. In my opinion, the WebRTC player could have a setting for whether to use that tolerance value or to always play in sync using NTP timestamps, at the expense of latency.
RTP/RTCP (which WebRTC uses) traditionally uses the RTCP Sender Report. That allows each SSRC stream to be synced on an NTP timestamp. Browsers do use them today, so things should work.
Are you doing any protocol bridging or anything that could be RTP only? What Media Server are you using?
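
To make the Sender Report mechanism concrete, here is a rough TypeScript sketch (the types and field names are illustrative, not a real library API) of how a receiver maps an RTP timestamp to wall-clock time using the latest SR for that stream, which is what lets separate audio and video SSRCs be lined up.

```typescript
// Rough sketch: map an RTP timestamp to NTP (wall-clock) time using the most
// recent RTCP Sender Report for that stream, so audio and video can be aligned.
interface SenderReport {
  ntpTimestampMs: number;  // NTP time from the SR, converted to milliseconds
  rtpTimestamp: number;    // RTP timestamp that corresponds to that NTP time
}

function rtpToWallClockMs(
  rtpTimestamp: number,
  lastSr: SenderReport,
  clockRate: number        // e.g. 48000 for Opus audio, 90000 for video
): number {
  // Unsigned 32-bit difference handles RTP timestamp wrap-around.
  const deltaTicks = (rtpTimestamp - lastSr.rtpTimestamp) >>> 0;
  return lastSr.ntpTimestampMs + (deltaTicks * 1000) / clockRate;
}

// A sync offset can then be computed per sample/frame:
//   offset = rtpToWallClockMs(audioTs, audioSr, 48000)
//          - rtpToWallClockMs(videoTs, videoSr, 90000)
// and the player delays whichever stream is ahead.
```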

How to display an MJPEG stream transmitted via UDP in a Mac OS X application

I have a camera that sends MJPEG frames as UDP packets over WiFi, which I would like to display in my Mac OS X application. My application is written in Objective-C and I am trying to use the AVFoundation classes to display the live stream. The camera is controlled using HTTP GET & POST requests.
I would like the camera to be recognized as an AVCaptureDevice, since I can easily display streams from different AVCaptureDevices. But because the stream arrives over WiFi, it isn't recognized as an AVCaptureDevice.
Is there a way I can create my own AVCaptureDevice that I can use to control this camera and display the video stream?
After much research into the packets sent from the camera, I have concluded that it does not communicate in any standard protocol such as RTP. What I ended up doing is reverse-engineering the packets to learn more about their contents.
I confirmed it does send JPEG images over UDP packets, and it takes multiple UDP packets to carry a single JPEG. I listen on the UDP port for packets and assemble them into a single image frame. Once I have a frame, I create an NSImage from it and display it in an NSImageView. This works quite nicely.
If anyone is interested, the camera is an Olympus TG-4. I am writing the components to control settings, shutter, etc.
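
The reassembly idea, sketched here in TypeScript on Node rather than the Objective-C the answer actually used (the port number is a placeholder, and packets are assumed to arrive in order, which real code would have to handle):

```typescript
// Rough sketch: collect UDP packets and cut complete JPEG frames out of the
// byte stream using the JPEG SOI (FF D8) and EOI (FF D9) markers.
import * as dgram from "node:dgram";

const PORT = 5004;                 // hypothetical port used by the camera
let buffer = Buffer.alloc(0);

const socket = dgram.createSocket("udp4");

socket.on("message", (packet) => {
  buffer = Buffer.concat([buffer, packet]);

  // Look for a complete JPEG: start-of-image followed by end-of-image.
  const soi = buffer.indexOf(Buffer.from([0xff, 0xd8]));
  const eoi = buffer.indexOf(Buffer.from([0xff, 0xd9]), soi + 2);
  if (soi !== -1 && eoi !== -1) {
    const frame = buffer.subarray(soi, eoi + 2);   // one complete JPEG frame
    buffer = buffer.subarray(eoi + 2);             // keep any trailing bytes
    handleFrame(frame);
  }
});

function handleFrame(jpeg: Buffer): void {
  // In the original answer this is where the frame became an NSImage shown in
  // an NSImageView; here we just report it.
  console.log(`got JPEG frame, ${jpeg.length} bytes`);
}

socket.bind(PORT);
```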

WebRTC video and photo at same time

I'm working on an application that transmits video in low quality using WebRTC. Periodically I want to send a single high-resolution frame from the same camera.
When I try to acquire another stream using getUserMedia I get the same low-quality one, and when I try to pass constraints to force a higher resolution, the operation fails with an OverconstrainedError (even though it works fine when there is no other stream open).
Is it even possible to have multiple streams with different parameters from the same device at the same time? Or is it possible to acquire a high-resolution image without requesting a new stream?
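
For context, the failing attempt described above presumably looks something like this sketch (the resolution and device id are placeholders). Only exact/min/max constraints can produce an OverconstrainedError; here that happens while the camera is already delivering the low-quality stream, as described in the question.

```typescript
// Sketch of the scenario described above: a low-quality stream is already open,
// and a second request asks the same camera for an exact high resolution.
// The concrete width/height/deviceId values are placeholders.
async function grabHighResStream(deviceId: string): Promise<MediaStream> {
  return navigator.mediaDevices.getUserMedia({
    video: {
      deviceId: { exact: deviceId },
      width: { exact: 3840 },   // "exact" constraints are the ones that can
      height: { exact: 2160 },  // fail with OverconstrainedError
    },
  });
}

// Usage, showing the failure mode from the question:
// try {
//   const hiRes = await grabHighResStream(someCameraId);
// } catch (err) {
//   if ((err as DOMException).name === "OverconstrainedError") {
//     // the camera is already delivering the low-quality stream
//   }
// }
```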

How do Chrome/Firefox handle SRTCP reports coming from a WebRTC connection?

SRTCP tracks the number of sent and lost bytes and packets, last received sequence number, inter-arrival jitter for each SRTP packet, and other SRTP statistics.
Do the mentioned browsers do something with SRTCP reports when dealing with an audio stream, for example adjusting the bitrate on the fly if network conditions change?
Given that Chrome does adjust bitrate and resolution of VP8 on the fly in a connection, I would assume that OPUS configurations are changed in the connection as well.
You can see the feedback on the sending audio in this image. The bitrate obviously drops slightly when using Opus. However, I would imagine that video bitrate would be the first thing changed in a video call, as changing it would have a greater effect.
Obviously, one cannot change the bitrate on a codec that only supports constant bitrates.
All the other stats are a combination of what the RTCP reports give (packetsLost, RTT, bits sent, etc.) and Google's monitoring of the inputs/outputs (audio level, echo cancellation, etc.).
NOTE: this is taken from a session created by AppRTC in Chrome.
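
Those RTCP-derived numbers are also exposed to applications. Here is a small sketch of reading them from a peer connection via the standard getStats() API (which fields are populated varies by browser and track kind):

```typescript
// Sketch: read RTCP-derived receive statistics from a live RTCPeerConnection.
async function logRtcpStats(pc: RTCPeerConnection): Promise<void> {
  const report = await pc.getStats();
  report.forEach((stats: any) => {
    if (stats.type === "inbound-rtp") {
      // Populated from the incoming RTP stream and the RTCP reports for it.
      console.log(
        `${stats.kind}: packetsLost=${stats.packetsLost}, jitter=${stats.jitter}`
      );
    }
    if (stats.type === "remote-inbound-rtp") {
      // Derived from the receiver reports the remote peer sends back to us.
      console.log(`${stats.kind}: roundTripTime=${stats.roundTripTime}`);
    }
  });
}

// Poll once per second, roughly what chrome://webrtc-internals does:
// setInterval(() => logRtcpStats(pc), 1000);
```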