Question on the structure of RTP Extension headers as explained in RFC 8285 - webrtc

In RFC 8285, which deals with RTP Header Extensions, the structure for a 1-byte header extension is as shown below (Section 4.2):
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|       0xBE    |    0xDE       |           length=3            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  ID   | L=0   |     data      |  ID   |  L=1  |   data...
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      ...data   |    0 (pad)    |    0 (pad)    |  ID   | L=3   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                          data                                 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
I understand the 0xBEDE part, which is explained in the RFC. Then comes the "length=3" field, which is followed by the actual extension elements. Each element consists of an ID followed by a length. A similar structure is defined for two-byte header extensions.
In both types of headers, I do not understand the "length=3" section. Is it just padding to reach a 32-bit boundary? If so, what purpose does this serve? Ease of parsing? Why not have the extension elements start immediately after the 0xBEDE? That would certainly have been more space efficient.
Maybe I am missing something basic.

This probably dates back to RFC 3550. Specifying the length field explicitly like this allows clients that do not understand extensions to skip them more easily.
Also note that until it was extended by RFC 5285 (later updated by RFC 8285), there could only be a single extension, so what you see is a backward-compatibility hack.
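To see what the explicit length buys a receiver, here is a rough Python sketch of parsing the one-byte-header format (a sketch only, not production parsing code): the 16-bit length field, counted in 32-bit words, tells a client exactly how many bytes to skip if it does not understand extensions at all, and the per-element L field does the same for individual elements.

def parse_one_byte_extensions(ext_block: bytes):
    # ext_block starts at the 0xBEDE marker of the RTP header extension.
    assert ext_block[0:2] == b"\xbe\xde", "not a one-byte-header extension"
    length_words = int.from_bytes(ext_block[2:4], "big")   # length in 32-bit words
    payload = ext_block[4:4 + 4 * length_words]            # a client may skip this wholesale

    elements = []
    pos = 0
    while pos < len(payload):
        byte = payload[pos]
        if byte == 0:                  # a zero byte is padding
            pos += 1
            continue
        ext_id = byte >> 4             # upper 4 bits: element ID
        data_len = (byte & 0x0F) + 1   # lower 4 bits: length minus one
        elements.append((ext_id, payload[pos + 1:pos + 1 + data_len]))
        pos += 1 + data_len
    return elements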

Related

Deflate: code lengths of > 7 bits for top-level HCLEN?

RFC 1951 specifies that the first level of encoding in a block contains HCLEN 3-bit values, which encode the lengths of the next level of Huffman codes. Since these are 3-bit values, it follows that no code for the next level can be longer than 7 bits (111 in binary).
However, there seem to be corner cases which (at least with the "classical" algorithm to build Huffman codes, using a priority queue) apparently generate codes of 8 bits, which can of course not be encoded.
An example I came up with is the following (this represents the 19 possible symbols resulting from the RLE encoding, 0-15 plus 16, 17 and 18):
symbol | frequency
-------+----------
0 | 15
1 | 14
2 | 6
3 | 2
4 | 18
5 | 5
6 | 12
7 | 26
8 | 3
9 | 20
10 | 79
11 | 94
12 | 17
13 | 7
14 | 8
15 | 4
16 | 16
17 | 1
18 | 13
According to various online calculators (e.g. https://people.ok.ubc.ca/ylucet/DS/Huffman.html), and also when building the tree by hand, some symbols in the above table (namely 3 and 17) produce 8-bit-long Huffman codes. The resulting tree looks ok to me, with 19 leaf nodes and 18 internal nodes.
So, is there a special way to calculate Huffman codes for use in DEFLATE?
Yes. deflate uses length-limited Huffman codes. You need either a modified Huffman algorithm that limits the length, or an algorithm that shortens a Huffman code that has exceeded the length. (zlib does the latter.)
In addition to the code lengths code being limited to seven bits, the literal/length and distance codes are limited to 15 bits. It is not at all uncommon to exceed those limits when applying Huffman's algorithm to sets of frequencies encountered during compression.
Though your example is not a valid or possible set of frequencies for that code. Here is a valid example that results in a 9-bit Huffman code, which would then need to be squashed down to seven bits:
3 0 0 5 5 1 9 31 58 73 59 28 9 1 2 0 6 0 0
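If you want to reproduce the problem, the plain (unlimited) Huffman construction can be sketched in a few lines of Python; run on the 19 frequencies above it yields a maximum code length greater than 7, which is exactly what the length-limiting (or code-squashing) step then has to fix. This only illustrates the problem, not zlib's solution.

import heapq

def huffman_code_lengths(freqs):
    # Plain Huffman construction with no length limit; zero-frequency symbols get no code.
    heap = [(f, i, [i]) for i, f in enumerate(freqs) if f > 0]
    heapq.heapify(heap)
    lengths = {i: 0 for _, i, _ in heap}
    tie = len(freqs)
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:              # every merge adds one bit to the symbols in both subtrees
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, tie, s1 + s2))
        tie += 1
    return lengths

freqs = [3, 0, 0, 5, 5, 1, 9, 31, 58, 73, 59, 28, 9, 1, 2, 0, 6, 0, 0]
print(max(huffman_code_lengths(freqs).values()))   # longer than 7 bits for this set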

bit varying in Postgres to be queried by sub-string pattern

The following Postgres table contains some sample content where the binary data is stored as bit varying (https://www.postgresql.org/docs/10/datatype-bit.html):
ID | Binary data
----------------------
1 | 01110
2 | 0111
3 | 011
4 | 01
5 | 0
6 | 00011
7 | 0001
8 | 000
9 | 00
10 | 0
11 | 110
12 | 11
13 | 1
Q: Is there any query (either in native SQL or as a Postgres function) that returns all rows where the binary data field is equal to a prefix (leading sub-string) of the target bit array? To make it clearer, let's look at the example search value 01101:
01101 -> no result
0110 -> no result
011 -> 3
01 -> 4
0 -> 5, 10
The result returned should contain the rows: 3, 4, 5 and 10.
Edit:
The working query is (thanks to Laurenz Albe):
SELECT * FROM table WHERE '01101' LIKE (table.binary_data::text || '%')
Furthermore I found this discussion about Postgres bit with fixed size vs bit varying helpful:
PostgreSQL Bitwise operators with bit varying "cannot AND bit strings of different sizes"
How about
WHERE '01101' LIKE (col2::text || '%')
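If you want to sanity-check what that predicate does outside the database, here is a tiny Python sketch of the same prefix test, using the sample rows and the expected result (3, 4, 5 and 10) from the question; it only illustrates the logic, not the SQL itself.

rows = {1: "01110", 2: "0111", 3: "011", 4: "01", 5: "0",
        6: "00011", 7: "0001", 8: "000", 9: "00", 10: "0",
        11: "110", 12: "11", 13: "1"}
search = "01101"

# A row matches when its bit string is a prefix of the search value,
# which is the same test as: '01101' LIKE (binary_data::text || '%')
print(sorted(pk for pk, bits in rows.items() if search.startswith(bits)))   # [3, 4, 5, 10]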
I think you are looking for bitwise and:
where col2 & B'01101' = col2

What is the meaning of the "Load" column in Apache balancer-manager?

I've set up the Apache (2.4) load-balancer which is working okay. To monitor its performance, I enabled the balancer-manager handler, which shows the status of the balancers.
I noticed a "Load" column, which was not present in version 2.2, with a value that may be negative, but I don't understand its meaning nor I was able to find documentation relative to this.
Can anyone explain the meaning of that value or point me to the right documentation?
I now understand how the calculation of "Load" works. Here is what I think is a simpler example than the one on the Apache documentation page.
Let's say we have 3 workers and a configured load factor of 1.
1) Start
a | b | c
--+---+---
0 | 0 | 0
add the load factor of 1 to all workers
a | b | c
--+---+---
1 | 1 | 1
now select the one with the highest value --> a, and decrease it by the sum of all load factors (=3) - this is the selected worker
a | b | c
---+---+---
-2 | 1 | 1
2) next round, add again 1 to all
a | b | c
---+---+---
-1 | 2 | 2
now select the one with the highest value --> b, and decrease it by the sum of all load factors (=3) - this is the selected worker
a | b | c
---+----+----
-1 | -1 | 2
3) next round, add again 1
a | b | c
---+----+----
0 | 0 | 3
now select the one with the highest value --> c, and decrease it by the sum of all load factors (=3) - this is the selected worker
a | b | c
---+----+----
0 | 0 | 0
start over again :)
I hope this helps others.
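For anyone who prefers code to tables, the same bookkeeping can be written as a few lines of Python; this is only a sketch of the byrequests idea walked through above (three workers, equal load factors of 1), not the actual mod_lbmethod_byrequests implementation.

lbstatus = {"a": 0, "b": 0, "c": 0}   # the value shown in the Load column
lbfactor = {"a": 1, "b": 1, "c": 1}   # configured load factors

def pick_worker():
    for w in lbstatus:                          # 1. every worker earns its load factor
        lbstatus[w] += lbfactor[w]
    chosen = max(lbstatus, key=lbstatus.get)    # 2. the most "urgent" worker wins
    lbstatus[chosen] -= sum(lbfactor.values())  # 3. the winner pays back the total (=3)
    return chosen

for request in range(6):
    print(pick_worker(), lbstatus)              # cycles a, b, c, a, b, c for equal factors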
The Load value is populated by lbstatus based on this line of code:
ap_rprintf(r, "<td>%d</td><td>", worker->s->lbstatus);
in https://svn.apache.org/viewvc/httpd/httpd/trunk/modules/proxy/mod_proxy_balancer.c?view=markup#l1767 (the line number might change when the code is modified)
Since your method is byrequests, lbstatus is specified by mod_lbmethod_byrequests, which defines:
lbstatus is how urgent this worker has to work to fulfill its quota of
work.
Details on the algorithm can be found here: https://httpd.apache.org/docs/2.4/mod/mod_lbmethod_byrequests.html
I too would like to know the description of the other columns like BUSY, ELECTED, etc. My LB has BUSY over 100 already. I thought BUSY should not exceed 100 (as in 100% server busyness or something).

Pairwise testing: How to create the table?

Hello, I have a doubt regarding how to create the table for pairwise testing.
For example, if I have three parameters which can each take two different values, how do I create a table of inputs with all possible combinations? Would it look something like this?
| 1 2 3
-----------
1 | 1 1 1
2 | 1 2 2
3 | 1 1 2
4 | 1 2 1
Does each parameter correspond to a column?
However, since I have 3 parameters, each of which can take 2 different values, shouldn't the number of test cases be 2^3?
There's a good article with links to some useful tools here:
http://blog.josephwilk.net/ruby/pairwise-testing-with-cucumber.html
For the parameters: each column is a parameter, and each row is a possible combination. Here is the table:
| 1 2 3
-----------
1 | 1 1 1
2 | 2 1 1
3 | 1 2 1
4 | 1 1 2
5 | 2 2 1
6 | 2 1 2
7 | 1 2 2
8 | 2 2 2
so 2^3=8 possible combinations as you can see :)
For the values: each column is a value, and each row is a possible combination:
| 1 2
--------
1 | 1 1
2 | 2 1
3 | 1 2
4 | 2 2
There are 2^2=4 possible combinations. Hope it helps.
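If you would rather generate such an exhaustive table programmatically than by hand, Python's itertools.product enumerates it directly; the snippet below is only an illustration and reproduces the 2^3 = 8 rows above (the row ordering may differ from the table).

from itertools import product

values = [1, 2]
for row, combo in enumerate(product(values, repeat=3), start=1):
    print(row, combo)     # 8 rows: (1, 1, 1), (1, 1, 2), ..., (2, 2, 2)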
1) Please note that pair-wise testing is not about exhaustively scanning all possible combinations of values of all parameters. Firstly, such a scan would produce an enormous number of test cases, and almost no existing system would be able to run all of them.
Secondly, pair-wise testing for a software system is based on the hope that the two parameters having the highest number of possible values are the culprits for the highest percentage of faults in that system.
This is of course only a hope, and almost no rigorous scientific research has existed so far to prove it.
2) What I often see in documentation discussing pair-wise testing, like this one, is that the list of all possible values (aka the pair-wise test table) is not constructed in a well-thought-out way. This creates confusion.
In your case, all the parameters have the same number of possible values (2 values), so you could choose any two of those three parameters to build the table. What you could pay attention to is the ordering of the combinations: you iterate first over the right-most parameter, then over the next parameter to the left, and so on.
Say if you have two parameters p1 and p2, p1 has two possible values apple and orange; and p2 has two possible values red and blue, then your pair-wise test table would be:
index| p1 p2
------------------
1 | apple red
2 | apple blue
3 | orange red
4 | orange blue

How to represent and insert into an ordered list in SQL?

I want to represent the list "hi", "hello", "goodbye", "good day", "howdy" (in that order) in a SQL table:
pk | i | val
------------
1 | 0 | hi
0 | 2 | hello
2 | 3 | goodbye
3 | 4 | good day
5 | 6 | howdy
'pk' is the primary key column. Disregard its values.
'i' is the "index" that defines that order of the values in the 'val' column. It is only used to establish the order and the values are otherwise unimportant.
The problem I'm having is with inserting values into the list while maintaining the order. For example, if I want to insert "hey" and I want it to appear between "hello" and "goodbye", then I have to shift the 'i' values of "goodbye" and "good day" (but preferably not "howdy") to make room for the new entry.
So, is there a standard SQL pattern to do the shift operation, but only shift the elements that are necessary? (Note that a simple "UPDATE table SET i=i+1 WHERE i>=3" doesn't work, because it violates the uniqueness constraint on 'i', and also it updates the "howdy" row unnecessarily.)
Or, is there a better way to represent the ordered list? I suppose you could make 'i' a floating point value and choose values in between, but then you need a separate rebalancing operation when no such value exists.
Or, is there some standard algorithm for generating string values between arbitrary other strings, if I were to make 'i' a varchar?
Or should I just represent it as a linked list? I was avoiding that because I'd like to also be able to do a SELECT .. ORDER BY to get all the elements in order.
As I read your post, I kept thinking 'linked list', and at the end I still think that's the way to go.
If you are using Oracle, and the linked list is a separate table (or even the same table with a self-referencing id - which I would avoid), then you can use a CONNECT BY query and the pseudo-column LEVEL to determine the sort order.
You can easily achieve this by using a cascading trigger that, on the insert/update operation, updates any existing 'index' entry equal to the new one to the index value + 1. This will cascade through all rows until the first gap stops the cascade - see the second example in this blog entry for a PostgreSQL implementation.
This approach should work independently of the RDBMS used, provided it offers support for triggers that fire before an update/insert. It basically does what you'd do if you implemented your desired behavior in code (increase all following index values until you encounter a gap), but in a simpler and more effective way.
Alternatively, if you can live with a restriction to SQL Server, check the hierarchyid type. While mainly geared at defining nested hierarchies, you can use it for flat ordering as well. It somewhat resembles your approach using floats, as it allows insertion between two positions by assigning fractional values, thus avoiding the need to update other entries.
If you don't use numbers but strings, you may have a table:
pk | i | val
------------
1 | a0 | hi
0 | a2 | hello
2 | a3 | goodbye
3 | b | good day
5 | b1 | howdy
You may insert a4 between a3 and b, a21 between a2 and a3, a1 between a0 and a2, and so on. You would need a clever function to generate an i for a new value v between p and n; the indexes can become longer and longer, or you need a big rebalancing from time to time.
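As an illustration of what such a 'clever function' could look like, here is a small Python sketch; the name key_between and the digits-plus-lowercase alphabet are my own assumptions, not anything standard, and it does no rebalancing.

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"

def key_between(lo, hi):
    # Return a key lexicographically between lo and hi (lo < hi).
    # Assumes keys never end in the smallest symbol '0', so a gap always exists.
    key = ""
    i = 0
    while True:
        lo_digit = ALPHABET.index(lo[i]) if i < len(lo) else 0
        hi_digit = ALPHABET.index(hi[i]) if i < len(hi) else len(ALPHABET)
        if hi_digit - lo_digit > 1:
            # enough room at this position for a digit strictly in between
            return key + ALPHABET[(lo_digit + hi_digit) // 2]
        key += ALPHABET[lo_digit]   # otherwise copy the lower key's digit and go one position further
        i += 1

print(key_between("a0", "a2"))   # 'a1'
print(key_between("a3", "b"))    # something between 'a3' and 'b', e.g. 'aj'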
Another approach could be to implement a (doubly) linked list in the table, where you don't save indexes but links to the previous and next entries, which would mean that you normally have to update only 1-2 elements:
pk | prev | val
------------
1 | null | hi
0 | 1 | hello
2 | 0 | goodbye
3 | 2 | good day
5 | 3 | howdy
hey between hello & goodbye:
hey gets pk 6,
pk | prev | val
------------
1 | null | hi
0 | 1 | hello
6 | 0 | hey <- ins
2 | 6 | goodbye <- upd
3 | 2 | good day
5 | 3 | howdy
The previous element of hey would be hello with pk=0, and goodbye, which previously linked to hello, now has to link to hey.
But I don't know if it is possible to find an 'order by' mechanism for this in many DB implementations.
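Whether the database itself can ORDER BY such a chain depends on the engine (a recursive CTE, or Oracle's CONNECT BY as mentioned above, can walk it), but following the links in application code is simple. Here is a small Python sketch of that traversal, using the rows above with null represented as None - purely illustrative:

# rows as (pk, prev, val); prev None marks the head of the list
rows = [(1, None, "hi"), (0, 1, "hello"), (6, 0, "hey"),
        (2, 6, "goodbye"), (3, 2, "good day"), (5, 3, "howdy")]

by_prev = {prev: (pk, val) for pk, prev, val in rows}

ordered, cursor = [], None
while cursor in by_prev:            # who points at the current row?
    pk, val = by_prev[cursor]
    ordered.append(val)
    cursor = pk
print(ordered)   # ['hi', 'hello', 'hey', 'goodbye', 'good day', 'howdy']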
Since I had a similar problem, here is a very simple solution:
Make your i column a float, but insert integer values for the initial data:
pk | i | val
------------
1 | 0.0 | hi
0 | 2.0 | hello
2 | 3.0 | goodbye
3 | 4.0 | good day
5 | 6.0 | howdy
Then, if you want to insert something in between, just compute a float value in the middle between the two surrounding values:
pk | i | val
------------
1 | 0.0 | hi
0 | 2.0 | hello
2 | 3.0 | goodbye
3 | 4.0 | good day
5 | 6.0 | howdy
6 | 2.5 | hey
This way, the number of inserts between the same two values is limited by the resolution of float values, but for almost all cases that should be more than sufficient.
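As a sketch of the insert logic in application code (the helper name index_between is made up for illustration): take the i values of the two neighbours, average them, and fall back to renumbering the whole list once the float resolution runs out.

def index_between(prev_i, next_i):
    # Midpoint between the indexes of the two neighbouring rows.
    mid = (prev_i + next_i) / 2.0
    if mid == prev_i or mid == next_i:   # out of float resolution
        raise ValueError("no room left - renumber the list first")
    return mid

print(index_between(2.0, 3.0))   # 2.5, the value used for 'hey' above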