How can I test the equality of two MediaStreams? - webrtc

I'm wondering if there is a way to determine if two MediaStreams are equal.
What do you mean by "equal"?
I'd like to determine if the two streams are using the same hardware sources (Same microphone and camera are being used).
Acquiring streamB with the exact same constraints as streamA would mean they are equal.
Here is what I've tried so far:
comparing via the MediaStream id e.g.: streamA.id == streamB.id
This falls away since according to the spec:
When a MediaStream object is created, the User Agent must generate an identifier string, and must initialize the object's id attribute to that string. A good practice is to use a UUID [rfc4122], which is 36 characters long in its canonical form. To avoid fingerprinting, implementations should use the forms in section 4.4 or 4.5 of RFC 4122 when generating UUIDs.
Compare the id's of the MediaStreamTracks - same story, a UUID is generated per track.
Compare the tracks' labels, which in current Chrome contain names/identifiers of the hardware. This is very close to what I'm looking for; however (emphasis mine):
User Agents may label audio and video sources (e.g., "Internal microphone" or "External USB Webcam"). The label attribute must return the label of the object's corresponding source, if any. If the corresponding source has or had no label, the attribute must instead return the empty string
Is there a different approach I could take? Should I never end up in a situation where I compare two media streams? Would you say I can trust the label attribute?
Thanks for your time.

groupId together with kind is probably the closest thing you will get. Until you get multiple mics/cams on the same device...

Checking to see if an image format supports a usage in Vulkan?

If I want to see what an image format can be used for, I can call vkGetPhysicalDeviceImageFormatProperties2() and set the usage flags for the image format. I've noticed that if the format isn't supported for those usages and settings, the structure I pass in is set to all zeros, so I can tell whether the format supports those uses. So if I want to know whether VK_FORMAT_R8G8B8_UINT supports sampling from a shader, I set VK_IMAGE_USAGE_SAMPLED_BIT in the usage flags and call that function.
What I wanted to know is whether that's equivalent to calling another function, vkGetPhysicalDeviceFormatProperties2() (exactly the same name but without 'Image' in it), giving that function the format, and checking whether VK_IMAGE_USAGE_SAMPLED_BIT is set.
So with the first method I give the format and the usages I want from it, and then check whether the returned values (max width, max height, etc.) are zero, meaning those usages aren't supported, versus the second method of passing the format, getting back the flags, and then checking the flags.
Are these two methods equivalent?
TL;DR: Do your image format checking properly: ask how you can use the format, then ask what functionality is available from usable format&usage combinations.
If you call vkGetPhysicalDeviceImageFormatProperties2 with usage flags and the like that don't correspond to a supported image type, you get an error: VK_ERROR_FORMAT_NOT_SUPPORTED. It inherits this due to the fact that it is said to "behave similarly to vkGetPhysicalDeviceImageFormatProperties", which has an explicit statement about this error:
If format is not a supported image format, or if the combination of format, type, tiling, usage, and flags is not supported for images, then vkGetPhysicalDeviceImageFormatProperties returns VK_ERROR_FORMAT_NOT_SUPPORTED.
Now normally, a function which gives rise to an error will yield undefined values in any return values. But there is a weird exception:
If the combination of parameters to vkGetPhysicalDeviceImageFormatProperties2 is not supported by the implementation for use in vkCreateImage, then all members of imageFormatProperties will be filled with zero.
However, there's an explicit note saying that this was old, bad behavior and is only preserved for compatibility's sake. Being a compatibility feature means that you can rely on it, but you shouldn't. Also, it only applies to the imageFormatProperties data and not any of the extension structures you can pass.
So it's best to just ignore this and ask your questions in the right order.

How do PDF readers validate form fields?

I was looking at the source code of several pdf files which were digitally signed (and had annotations and form fields as well).
I noticed that each "Annot" dictionary has a "M" value which stores the latest time it was modified - which can then be checked with the "M" value for the "Sig" dictionary which stores when the pdf file was digitally signed.
However, I noticed that dictionaries with type "XObject" and subtype "Form" do not have an "M" value - i.e. they do not store the time at which said form was modified. In such cases, how do PDF readers validate whether a change to a form field is allowed (e.g. for a digital signature where no changes are allowed, no form field can be changed after the document is signed - how is this verified?)?
I just attached an example pdf at this link:
https://www.mediafire.com/file/q8ed9rkf35kgxgq/output.txt/file
Some Misconceptions
There apparently are a number of misconceptions to clear up here.
I noticed that each "Annot" dictionary has a "M" value which stores the latest time it was modified - which can then be checked with the "M" value for the "Sig" dictionary which stores when the pdf file was digitally signed.
The M entry of annotation dictionaries is optional, so you cannot count on it being there.
The annotation M value is essentially just a string; it is merely recommended, not required, to contain a date as specified in the PDF specification, so you might find a value like "my mother's 42nd birthday" in it.
The annotation M value is not backed by a digital timestamp, so a forger could put anything there.
Furthermore, the M entry of a signature dictionary is also optional, and by itself it cannot be trusted either.
Thus, no, these entries provide no means to validate anything.
However, I noticed that dictionaries with type "XObject" and subtype "Form" do not have an "M" value - i.e. they do not store the time at which said form was modified. In such cases, how do PDF readers validate whether a change to a form field is allowed (e.g. for a digital signature where no changes are allowed, no form field can be changed after the document is signed - how is this verified?)?
First of all, as explained above, the M values cannot be used at all for modification detection, so whether some objects do or do not have them is irrelevant.
Furthermore, a form XObject by itself is not a form field meant by the document modification detection and prevention settings of a signature. The form fields meant are AcroForm form fields (or, in a deprecated special case, XFA form fields). A form XObject may be used as appearance of an AcroForm form field but in that case the pivotal check point is the form field itself.
How to Validate Changes
(For some backgrounds on PDF signatures you may want to read this answer first.)
Depending on the document modification detection and prevention (DocMDP) settings of the signatures of a document only certain changes are allowed to a document, see this answer.
But even the allowed changes may not be applied by changing the original objects in the PDF. That would after all change the digest over the originally signed byte ranges and so invalidate the signature. Thus, the changed and added objects are appended to the PDF, capped off by a cross reference table or stream for these objects.
Thus, what you have to do for DocMDP validation is determine the extent of the signed revision in the PDF file, find out that way what has been appended, and analyze whether those additions change the signed revision in allowed or disallowed ways.
While this may sound simple at first, it is not, in particular because "allowed" and "disallowed" changes are characterized by their effects on document appearance and behavior, not by the actual PDF objects that may be affected.
Here currently ETSI working groups are attempting to transform those characterizations into criteria for PDF objects; the results are to be published as ETSI TS 119 102-3, probably in multiple parts.
Some Details
In comments you asked
how do you tell from the appearance of a modified object whether it was added before or after the digital signature?
Well, as mentioned above:
First you determine the extent of the signed revision in the PDF file.
I.e. you take the ByteRange entry of the signature dictionary and take the start of the lower range and the end of the higher range. E.g. if that entry is
[ 0 67355 72023 6380 ]
the signed revision starts at offset 0 and ends at offset 72023+6380-1=78402, inclusive.
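As a minimal illustration of that arithmetic, here is a Python sketch; it assumes the ByteRange array has already been read out of the signature dictionary:

def signed_revision_extent(byte_range):
    """Return (start, end_inclusive) of the signed revision for a
    signature ByteRange of the form [offset1 length1 offset2 length2]."""
    offset1, length1, offset2, length2 = byte_range
    start = offset1                        # should be 0 for a sensible signature
    end_inclusive = offset2 + length2 - 1  # last signed byte of the revision
    return start, end_inclusive

print(signed_revision_extent([0, 67355, 72023, 6380]))  # -> (0, 78402)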
(Obviously some sanity checks are indicated, in particular that the start offset is 0, that the gap in the signed byte ranges exactly contains the signature dictionary Contents value, that the signed revision as a whole is a valid PDF and all its cross references point to indirect objects completely contained in that signed revision, and that that signed revision indeed is a previous revision of the whole PDF, i.e. that the chain of cross reference streams or tables contains the cross reference stream or table of that revision.)
Then you find out that way what has been appended.
I.e. you compare the cross reference stream or table of the whole file and the cross reference stream or table of the signed revision.
If some object is referenced for an object number now but was not referenced for that object number in the signed revision, you found a change to check.
(Strictly speaking you should iterate along the chain of cross reference streams or tables from the signed revision to the whole file, i.e. revision by revision, and check the changes in each revision.)
For this procedure you obviously have to use the original file, not some version uncompressed by tools like qpdf, otherwise you cannot do the offset tests.
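As a rough illustration of the comparison step (not a complete DocMDP validator), here is a Python sketch; it assumes you have already parsed the cross reference data of the signed revision and of the whole file into dictionaries mapping object numbers to (generation, offset) entries; the parsing itself is the hard part and is not shown:

def changed_objects(xref_signed, xref_full):
    # Both arguments are assumed to be dicts mapping an object number to
    # its (generation, byte offset) entry; building these dicts from the
    # actual PDF bytes is deliberately left out here.
    changes = []
    for obj_num, entry in xref_full.items():
        if obj_num not in xref_signed:
            changes.append((obj_num, "added after signing"))
        elif xref_signed[obj_num] != entry:
            changes.append((obj_num, "replaced after signing"))
    return changes

# Each reported object then has to be checked against the changes that the
# signature's DocMDP settings actually allow.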
Is it possible for an attacker to add a new annotation object before the xref table corresponding to the digital signature, and adjust the previous xref table values, so that a broken document passes as an accepted document?
No. The signed revision including its cross references is covered by the signature. Manipulating those bytes will invalidate the signature mathematically.

Determining gnuradio block input and output types directly from block

I know that from their input and output signatures, it's possible to determine the size of a gnuradio block's input and output items. I am wondering whether it's also possible to directly determine input and output type (float vs complex etc) from a block.
Within the GNU Radio runtime, only the size is stored. Type information only exists in the source code, and in GNU Radio Companion if you are using that.
So, no, you cannot get type information from a block object that already exists — except by imperfect outside-information strategies like looking up the block's name in the installed GRC data files to guess what the type is.
Because there are no types but only sizes, items can be reinterpreted if they are the same size, which may occasionally be useful; for example, you can connect a block producing "complex" to one expecting "vector of 2 floats" and get a useful result since a complex is represented as two floats.
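To see this from Python, here is a small sketch; it assumes a recent GNU Radio release in which the io_signature accessors are exposed to Python, and it only ever recovers item sizes, never types:

from gnuradio import gr, blocks

# A block that multiplies a complex stream by a complex constant.
blk = blocks.multiply_const_cc(1.0)

in_size = blk.input_signature().sizeof_stream_item(0)
out_size = blk.output_signature().sizeof_stream_item(0)

# Only the size is available: 8 bytes could be one complex (two floats),
# a vector of two floats, or a double; the block object cannot tell you which.
print(in_size, out_size)                # typically 8 8
print(in_size == gr.sizeof_gr_complex)  # True, but that is only a size match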

API with an undefined number of parameters

I am building an API where I allow users to pass 2 different lists of custom fields.
The API is basically this:
def action(type, name, date, name_custom_1, name_custom_2, name_custom_3, date_custom_1, date_custom_2, date_custom_3, date_custom_4)
So type, name and date are parameters of this API and are mandatory.
name_custom_* and date_custom_* are optional; I could have 0, 1, 2, 3 ...
I am putting a limit of 3 on name_custom and 4 on date_custom for technical reasons, but eventually these limits may be increased (they will never be completely removed, though).
Now my question is: from a user's point of view, which of these is the better way to design this API:
def action(type, name, date, name_custom_1, name_custom_2, name_custom_3, date_custom_1, date_custom_2, date_custom_3, date_custom_4)
or
def action(type, name, date, names_custom, dates_custom):
Where names_custom and dates_custom are lists which cannot be bigger than X.
I am struggling between both and find value and logic in both. Any suggestions?
The list parameters give a cleaner solution, because:
There are fewer arguments in the function signature, making the documentation easier to read for humans.
It is more resilient to change. Suppose you decide to change the maximum number of custom arguments from 4 to 5. In the list approach, the change is simpler.
Even having 5 arguments in a function call is more than usual, and often considered sloppy (see How many parameters are too many?). You may want to consider introducing a class, or a few classes, here. Depending on your application, maybe it makes sense to create a class that encapsulates the name and the list of custom names, and a class that encapsulates the date and the list of custom dates? And perhaps the action itself is better off being a class with a number of setter methods?
In other words, if your functions become long, or argument lists become long, it is often a sign that there are classes waiting to be discovered underneath your design.
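For what it's worth, here is a minimal Python sketch of the list-based variant (parameter names and limits taken from the question; the validation style is just one option):

MAX_CUSTOM_NAMES = 3   # raising a limit later only means changing these constants
MAX_CUSTOM_DATES = 4

def action(type, name, date, names_custom=None, dates_custom=None):
    names_custom = list(names_custom or [])
    dates_custom = list(dates_custom or [])

    if len(names_custom) > MAX_CUSTOM_NAMES:
        raise ValueError("at most %d custom names are allowed" % MAX_CUSTOM_NAMES)
    if len(dates_custom) > MAX_CUSTOM_DATES:
        raise ValueError("at most %d custom dates are allowed" % MAX_CUSTOM_DATES)

    # ... actual work goes here ...

# Callers no longer need to know how many slots exist:
# action("invoice", "ACME", "2024-01-01", names_custom=["nickname"])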

How to identify Drive ID?

The new Google Drive Android API has 2 types of string IDs available, the 'resource' ID and the 'encoded' ID.
'encoded' id from DriveId.encodeToString()
"DriveId:CAESHDBCMW1RVVcyYUZKZmRhakIzMDBVbXMYjAUgssy8yYFRTTNKRU55"
'resource' id from DriveId.getResourceId()
"UW2aFJfdajB3M3JENy00Ums0B1mQ"
In the process I end up with a string that can contain any one of them (the result of some timing issues). My question is:
If I need to 'parse' the string in order to identify the type, is there a characteristic I can rely on? For instance:
'encoded' id will always start with 'DriveId:' substring
'resource' id will have some length limit
can I abuse error return from 'decodeFromString()'?
or should I prefix the string with my own tag? What would be the minimal 'safe' tag (i.e. something that will never appear at the beginning of these IDs)?
Please point me in the right direction so I don't have to re-do it with the next release.
I have run into yet another issue that should be mentioned here so others don't waste time falling into the same pit. The 'resourceID' can be ported and will remain unique for the object it identifies, whereas the 'encodedID' has only 'device' scope. This means that you CAN'T transfer an 'encodedID' to another device (with the same account) and try to retrieve a file/folder with it. So I assume it is unique to a Google Play Services instance.
Please do not rely on any formatting of either ID type. These are subject to change without notice.
If you need to use both, and track the differences between them you should have your own method of doing so within your app.
Really, you should probably always just store the encoded ID, since it is always guaranteed to be present, and if it contains a resourceId, it's easy to get back out.