Is it possible in HTTP/2 that a DATA frame is received with the END_STREAM flag set, and a trailers frame is received later? In other words, does a frame with END_STREAM indicate that no further frames will be sent on the stream?
Also, must a trailers frame (which is effectively a HEADERS frame) have END_STREAM set?
Is it possible in HTTP/2 that a DATA frame is received with the END_STREAM flag set, and a trailers frame is received later?
No. When a frame with the END_STREAM flag is received, the stream enters the "half-closed (remote)" state, so the peer will not send any further DATA or HEADERS frames on it. And yes, a trailers frame must set END_STREAM: trailers are carried in a HEADERS frame that terminates the stream (RFC 7540, Section 8.1).
This is specified in section 5.1 of RFC 7540, where it says:
half-closed (remote):
A stream that is "half-closed (remote)" is no longer being used by the peer to send frames. In this state, an endpoint is no longer obligated to maintain a receiver flow-control window.
If an endpoint receives additional frames, other than WINDOW_UPDATE, PRIORITY, or RST_STREAM, for a stream that is in this state, it MUST respond with a stream error (Section 5.4.2) of type STREAM_CLOSED.
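To make the rule concrete, here is a minimal C sketch of the state handling described above (the enum, the constants, and on_frame() are illustrative names, not taken from any particular HTTP/2 library):

#include <stdint.h>
#include <stdbool.h>

/* HTTP/2 frame types and flags involved here (RFC 7540, Section 6). */
#define H2_FRAME_DATA      0x0
#define H2_FRAME_HEADERS   0x1
#define H2_FLAG_END_STREAM 0x1

/* Hypothetical per-stream state, reduced to what matters for this question. */
enum stream_state { STREAM_OPEN, STREAM_HALF_CLOSED_REMOTE };

/* Apply the END_STREAM rule from Section 5.1 to an incoming frame.
 * Returns false when the peer violated the state machine and the receiver
 * must answer with a stream error of type STREAM_CLOSED. */
static bool on_frame(enum stream_state *state, uint8_t type, uint8_t flags)
{
    bool carries_data = (type == H2_FRAME_DATA || type == H2_FRAME_HEADERS);

    if (*state == STREAM_HALF_CLOSED_REMOTE && carries_data)
        return false;   /* END_STREAM already seen: no DATA or trailer HEADERS may follow */

    if (carries_data && (flags & H2_FLAG_END_STREAM))
        *state = STREAM_HALF_CLOSED_REMOTE;   /* trailers, if any, are the HEADERS frame carrying this flag */

    return true;
}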
I just finished reading sections 1-6.2 of the OpenFlow specification here.
Section 6.1.2 says:
Packet-in events can be configured to buffer packets. For packet-in generated by an output action in a flow entries or group bucket, it can be specified individually in the output action itself (see 7.2.6.1), for other packet-in it can be configured in the switch configuration (see 7.3.2). If the packet-in event is configured to buffer packets and the switch has sufficient memory to buffer them, the packet-in event contains only some fraction of the packet header and a buffer ID to be used by a controller when it is ready for the switch to forward the packet. Switches that do not support internal buffering, are configured to not buffer packets for the packet-in event, or have run out of internal buffering, must send the full packet to controllers as part of the event. Buffered packets will usually be processed via a Packet-out or Flow-mod message from a controller, or automatically expired after some time.
This makes it sound like for every packet that hits the OpenFlow switch, an asynchronous message must be sent to the controller to make a forwarding decision. However, Chapter 5 makes it sound like a switch has a set of OpenFlow flow entries, builds an action set from them that determines what should be done with a packet, and only forwards the packet to the controller when there is a flow-table miss.
Under what conditions is a packet sent to the controller for a decision? Is it always? Or is it only circumstantial?
Packets are sent to the OpenFlow controller any time the output port is set to the controller.
PACKET_IN events also occur when a packet doesn't match any flow on the switch; the packet is then sent to the controller. Otherwise no event is created: the switch simply forwards the packet according to its flow rules and the controller is none the wiser.
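To reconcile the two readings of the spec, here is a rough C sketch of the switch-side decision (the struct layout and function are purely illustrative; a real switch implements this via the flow-table pipeline of Chapter 5):

#include <stdbool.h>
#include <stddef.h>

struct packet;   /* opaque here; stands for the parsed packet headers */

/* Hypothetical flow entry: a match predicate plus the resolved output port. */
struct flow_entry {
    bool (*matches)(const struct flow_entry *fe, const struct packet *pkt);
    unsigned out_port;
};

#define PORT_CONTROLLER 0xfffffffdu   /* OFPP_CONTROLLER in OpenFlow 1.3 */

/* A PACKET_IN is generated only on a table miss (assuming the table-miss
 * behaviour or entry punts to the controller) or when the matching entry's
 * output action explicitly targets the controller port. Everything else is
 * forwarded by the switch without involving the controller. */
static bool needs_packet_in(const struct flow_entry *table, size_t n,
                            const struct packet *pkt)
{
    for (size_t i = 0; i < n; i++)
        if (table[i].matches(&table[i], pkt))
            return table[i].out_port == PORT_CONTROLLER;

    return true;   /* table miss */
}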
Say I have a USB device, a camera for instance, and I would like to transfer the image sequence captured by the camera to the host using the libusb API.
The following points are not clear to me:
How is the IN endpoint on the device populated? Is it always the full image data of one frame (optionally plus some status data)?
libusb_bulk_transfer() has a length parameter to specify how much data the host wants to read IN, and a transferred parameter indicating how much data was actually transferred (see the sketch after this list). The question is: should I always request the same amount of data that the IN endpoint would send? If so, in what case would transferred be smaller than length?
How is it determined how much data will be sent by the IN endpoint for each transfer request?
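For reference, a minimal C sketch of such a read with libusb; EP_IN, the buffer size, and the timeout are placeholders that would come from the device's descriptors. One common case where transferred ends up smaller than length is the device ending the transfer early with a short packet:

#include <stdio.h>
#include <libusb-1.0/libusb.h>

#define EP_IN    0x81          /* placeholder: bulk IN endpoint address */
#define BUF_SIZE (64 * 1024)   /* placeholder: maximum chunk we ask for */

static int read_chunk(libusb_device_handle *dev)
{
    static unsigned char buf[BUF_SIZE];
    int transferred = 0;

    /* Ask for up to BUF_SIZE bytes; the device may finish earlier with a
     * short packet, in which case transferred < BUF_SIZE. */
    int rc = libusb_bulk_transfer(dev, EP_IN, buf, (int)sizeof buf,
                                  &transferred, 1000 /* ms */);
    if (rc == 0 || rc == LIBUSB_ERROR_TIMEOUT)
        printf("requested %d, got %d bytes\n", (int)sizeof buf, transferred);

    return rc;
}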
When two peers are using WebRTC transmission with TURN as a relay server, we've noticed that from time to time the data inside a Send Indication or ChannelData message is actually a valid STUN Binding Request (type 0x0001). The other peer responds in the same way with a valid Binding Response (type 0x0101). This happens repeatedly during the whole conversation. Both peers are forced to use the TURN server. What is the purpose of encapsulating a typical STUN message inside the data attribute of a TURN transmission frame? Is it described in any document?
Here is an example of Channel Data frame:
[0x40,0x00,0x00,0x70,0x00,0x01,0x00,0x5c,0x21,0x12,0xa4,0x42,0x71,0x75,0x6d,0x6a,0x6f,0x66,0x69,0x6f...]
0x40,0x00 - channel number
0x00,0x70 - length of data
0x00,0x01,0x00,0x5c,0x21,0x12... - data, that can be parsed to a Binding Request
These are ICE connectivity checks (described in RFC 5245) running over TURN, as well as consent freshness checks described in RFC 7675.
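For illustration, a minimal standalone C sketch (no TURN library assumed) that peels the 4-byte ChannelData header and checks whether the payload starts like a STUN message, matching the byte breakdown above:

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* ChannelData framing is defined in RFC 5766, Section 11.4; the STUN header
 * (message type, length, magic cookie 0x2112A442) in RFC 5389, Section 6. */
static bool channeldata_carries_stun(const uint8_t *buf, size_t len)
{
    if (len < 4 + 20)                    /* ChannelData header + STUN header */
        return false;

    uint16_t channel  = (uint16_t)((buf[0] << 8) | buf[1]);
    uint16_t data_len = (uint16_t)((buf[2] << 8) | buf[3]);
    if (channel < 0x4000 || channel > 0x7FFF || (size_t)data_len + 4 > len)
        return false;

    const uint8_t *stun = buf + 4;
    uint16_t msg_type = (uint16_t)((stun[0] << 8) | stun[1]);
    uint32_t cookie   = ((uint32_t)stun[4] << 24) | ((uint32_t)stun[5] << 16) |
                        ((uint32_t)stun[6] << 8)  |  (uint32_t)stun[7];

    /* 0x0001 = Binding Request, 0x0101 = Binding Response (success) */
    return (msg_type & 0xC000) == 0 && cookie == 0x2112A442;
}

Fed the example frame above (channel 0x4000, length 0x0070, payload starting 0x0001 ... 0x2112A442), this returns true, i.e. the relayed data is an ICE/consent check rather than application traffic.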
SRTCP tracks the number of sent and lost bytes and packets, the last received sequence number, the inter-arrival jitter for each SRTP packet, and other SRTP statistics.
Do the mentioned browsers do something with these SRTCP reports when dealing with an audio stream, for example adjust the bitrate on the fly if network conditions change?
Given that Chrome does adjust the bitrate and resolution of VP8 on the fly within a connection, I would assume that Opus configurations are changed within the connection as well.
You can see the feedback on the sending audio in this image. The bitrate obviously drops slightly when using Opus. However, I would imagine that the video bitrate would be the first thing changed in a video call, as changing it has the greater effect.
Obviously, one cannot change the bitrate of a codec that only supports constant bitrates.
All the other stats are a combination of what the RTCP reports give (packetsLost, RTT, bits sent, etc.) and Google's monitoring of the inputs/outputs (audio level, echo cancellation, etc.).
NOTE: this is taken from a session created by AppRTC in Chrome.
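As a purely illustrative sketch of the kind of feedback loop those RTCP receiver reports enable, here is a loss-driven bitrate adjustment in C. This is not Chrome's actual algorithm (Chrome has its own bandwidth estimator); it only shows how the "fraction lost" field from a receiver report could drive an encoder's target bitrate:

#include <stdint.h>

/* fraction_lost is the 8-bit field from an RTCP receiver report (RFC 3550):
 * packets lost since the previous report, as a fixed-point fraction out of 256. */
static uint32_t adjust_target_bitrate(uint32_t current_bps, uint8_t fraction_lost)
{
    double loss = fraction_lost / 256.0;

    if (loss > 0.10)    /* heavy loss: back off multiplicatively */
        return (uint32_t)(current_bps * (1.0 - 0.5 * loss));
    if (loss < 0.02)    /* clean path: probe upwards gently */
        return current_bps + current_bps / 20;

    return current_bps; /* moderate loss: hold steady */
}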
In my RTSP server, I need to know the current FPS of the stream from an Axis camera every second.
Is there any specific RTSP command through which I can request the camera to send FPS information to the RTSP server?
Thanks,
Prateek
The only official way in RTSP to inform a receiver about the frame rate is inside the SDP of the DESCRIBE response.
Either directly via a=framerate:<frame rate>, which by definition gives only the maximum frame rate, or inside the configuration information of your stream, which is also sent via SDP in a=rtpmap:<payload type> <encoding name>/<clock rate> [/<encoding parameters>] or periodically inside the stream itself.
A better way is to compute the frame rate on the receiver side using the timestamp of every incoming frame (see the sketch below).
Most newer AXIS devices (those using H.264) use the absolute timestamp of the camera (check the camera setup!). The firmware of older devices is buggy and you cannot rely on the timestamp sent by the camera; only the time difference between two frames is exact.
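For illustration, a minimal C sketch of that receiver-side computation, assuming the usual 90 kHz RTP video clock (use the clock rate announced in a=rtpmap if it differs); averaging the result over one second gives the per-second FPS figure asked for:

#include <stdint.h>

#define RTP_CLOCK_HZ 90000.0   /* assumption: standard video clock rate */

/* Instantaneous frame rate from the RTP timestamps of two consecutive
 * frames; unsigned subtraction handles the 32-bit timestamp wrap-around. */
static double fps_from_timestamps(uint32_t prev_rtp_ts, uint32_t curr_rtp_ts)
{
    uint32_t delta = curr_rtp_ts - prev_rtp_ts;
    return delta ? RTP_CLOCK_HZ / (double)delta : 0.0;
}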
jens.