Dropping samples with tagged streams - gnuradio

I'm trying to accomplish the following behavior:
I have a continuous stream of symbols that periodically alternates between pilot symbols and data symbols. I have the Correlation Estimator block that tags the locations of the pilots in the stream. Now I would like to filter out the pilots so that the following blocks receive data only, with the pilots discarded based on the tags produced by the Correlation Estimator block.
Are there any existing blocks that allow me to achieve this? I've tried to search but am a bit lost.

Hm, technically, the packet-header Demux could do that, but it's a complex beast and the things you need to do to satisfy its input requirements might be a bit complicated.
So, instead, simply write your own (general) block! That's pretty easy: just save your current state (PASSING or DROPPING) in a member of the block class, and change it based on the tags you see (when in PASSING mode, you look for the correlator tag), or whether you've dropped enough symbols (in DROPPING mode). A classical finite state machine!
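For reference, here is a rough sketch of such a block as an embedded Python block. It is untested, and the block name, the tag key ("corr_est") and the pilot length are assumptions you would adapt to your flowgraph:

import numpy as np
import pmt
from gnuradio import gr

class drop_pilots(gr.basic_block):
    """Drop pilot_len symbols after each correlator tag, pass everything else."""
    PASSING, DROPPING = 0, 1

    def __init__(self, pilot_len=64, tag_key="corr_est"):
        gr.basic_block.__init__(self, name="drop_pilots",
                                in_sig=[np.complex64], out_sig=[np.complex64])
        self.set_tag_propagation_policy(gr.TPP_DONT)  # offsets change, so don't auto-propagate tags
        self.pilot_len = pilot_len          # number of pilot symbols to discard per burst
        self.tag_key = pmt.intern(tag_key)  # key emitted by the Correlation Estimator
        self.state = self.PASSING
        self.to_drop = 0                    # pilots still to discard while DROPPING

    def general_work(self, input_items, output_items):
        inp, out = input_items[0], output_items[0]
        read = self.nitems_read(0)
        consumed = produced = 0
        while consumed < len(inp) and produced < len(out):
            if self.state == self.DROPPING:
                n = min(self.to_drop, len(inp) - consumed)
                consumed += n
                self.to_drop -= n
                if self.to_drop == 0:
                    self.state = self.PASSING
            else:
                # Copy symbols up to the next correlator tag (or the end of the buffer).
                tags = [t for t in self.get_tags_in_range(0, read + consumed, read + len(inp))
                        if pmt.eqv(t.key, self.tag_key)]
                end = (tags[0].offset - read) if tags else len(inp)
                n = min(end - consumed, len(out) - produced)
                out[produced:produced + n] = inp[consumed:consumed + n]
                produced += n
                consumed += n
                if tags and consumed == tags[0].offset - read:
                    self.state = self.DROPPING
                    self.to_drop = self.pilot_len
        self.consume(0, consumed)
        return produced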


Why can't Vulkan support inline commands and secondary command buffers in the same subpass?

Why can't subpass contents support both VK_SUBPASS_CONTENTS_INLINE and VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS at the same time? I want to use a G-buffer and secondary command buffers to render my scene.
TL;DR: Because the specification says so. Put your inline commands into one or more separate secondary command buffers.
Long version:
Why can't subpass contents support both VK_SUBPASS_CONTENTS_INLINE and VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS at the same time?
If you're asking why you literally can't combine them, it's because they're not bit flags, but a sequence. Bit flags, like VkBufferUsageFlagBits, will typically have values that each represent a single bit in a 32-bit value.
Sequences like VkSubpassContents have values that start at 0 and increment by 1 each time (although extension provided values will often jump ahead).
Since VK_SUBPASS_CONTENTS_INLINE is literally 0, there's no way to combine it with VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS, which is literally 1.
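To make the distinction concrete, here is a small Python sketch (the numeric values mirror the Vulkan headers; the flag values shown are only a representative subset):

from enum import IntEnum, IntFlag

# Bit-flag style: each value occupies its own bit, so values can be OR'd together.
class VkBufferUsageFlagBits(IntFlag):
    TRANSFER_SRC_BIT = 0x00000001
    TRANSFER_DST_BIT = 0x00000002
    UNIFORM_BUFFER_BIT = 0x00000010
    VERTEX_BUFFER_BIT = 0x00000080

# Sequence style: values simply count up from 0, so OR-ing them is meaningless.
class VkSubpassContents(IntEnum):
    INLINE = 0
    SECONDARY_COMMAND_BUFFERS = 1

usage = VkBufferUsageFlagBits.TRANSFER_DST_BIT | VkBufferUsageFlagBits.VERTEX_BUFFER_BIT
print(hex(usage))  # 0x82 - both bits survive, so both usages are expressed

combined = VkSubpassContents.INLINE | VkSubpassContents.SECONDARY_COMMAND_BUFFERS
print(hex(combined))  # 0x1 - 0 | 1 is indistinguishable from SECONDARY_COMMAND_BUFFERS alone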
If you're asking why VkSubpassContents is a sequence and not a bit flag, that's just the way the specification is. It might seem like having a subpass include both inline commands and secondary buffers would be trivial, but it probably only seems that way to people using the API, as opposed to the people who have to implement the backend. Likely it either created some potential ambiguity, or would have made some threading edge case a nightmare to implement, or something similar.
I want to use a G-buffer and secondary command buffers to render my scene.
As Nicol points out in his comments, there's nothing stopping you from doing that. Whatever inline commands you're trying to use along with your secondary command buffers, you can just put into another secondary buffer. If this is somehow problematic because you're interleaving lots of inline commands with the points where you want to execute your secondary buffers, well, that sounds more like a design problem; maybe you're trying to execute work in a subpass that doesn't belong there.

UML activity diagrams: the meaning of «iterative»

I want to check the definition of «iterative» for expansion regions in activity diagrams. For me personally this was never a question, because I understand it as letting me do a for loop, e.g.,
For i=1 to 10
Do-Something // So it does it 10 times
End For
However, while I was presenting my UML diagram to an audience, an engineer team leader (not a UML maven) objected to the term 'iterative', because he understood 'iterative' to mean an 'iterative process', in which each step improves a result. I am also aware of this definition, but I assume the UML definition is not that, but rather means a simple for loop.
Please confirm that the UML definition of «iterative» is like a simple for loop, or correct me if it is not.
No, it has a different meaning. UML 2.5 states on p. 480:
The mode of an ExpansionRegion controls how its expansion executions proceed.
If the value is iterative, the expansion executions must occur in an iterative sequence, with one completing before another can begin. The first expansion execution begins immediately when the ExpansionRegion starts executing, with subsequent executions starting when the previous execution is completed. If the input collections are ordered, then the expansion executions are sequenced in the order induced by the input collection. Otherwise, the order of the expansion executions is not defined.
Other values for this keyword are parallel and stream. You can guess that behavior defined in a parallel region can be executed in parallel. stream is a bit more complicated, and you can read about it on that page of the UML spec.
The for-loop itself comes from the input collection you pass to the region. This can be processed in either of the above ways.
tl;dr
So rather than describing a for loop, the keyword «iterative» on the region says that its behavior may not be handled in parallel.
Ahhh, semantics...
First a disclaimer - I am not a native English speaker. Yet I believe both my level of English and my IT experience are sufficient to answer this question.
Let's have a look at the dictionary definition of iterative first:
iterative adjective
/ˈɪtərətɪv/
/ˈɪtəreɪtɪv/, /ˈɪtərətɪv/
​(of a process) that involves repeating a process or set of instructions again and again, each time applying it to the result of the previous stage
We used an iterative process of refinement and modification.
an iterative procedure/method/approach
The emphasis on the last part of the definition ("each time applying it to the result of the previous stage") is mine.
Of course this is a pure word definition, not in the context of software development.
In real life a process can quite easily be considered repetitive but in itself not really iterative. Imagine an assembly line in a mass-production factory. At one of the stations a particular screw (or set of screws) is applied to join two or more elements. On every run the same type and number of screws is applied to an identical set of elements. There is a virtually endless stream of similar part sets, each consisting of the same types of parts as before and requiring the same kind of connection. From the station's perspective, joining the elements is a repetitive process, but it is not iterative, as each join is applied to a different set of elements - it is never applied to those already joined.
If you think of code, it's somewhat different, though. When applying a loop, you almost always have some resulting set impacted by it, and one can argue that every loop step changes that resulting set further, meaning the next loop step is applied to the result of the previous step. From this perspective, almost every loop is iterative.
On the other hand, you can have a loop like this:
loop
    wait 10
while buffer is empty
read buffer
You can clearly say it is a loop, yet nothing is being changed. All the code does is wait for a buffer to fill. So it is not iterative.
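For contrast, here is a loop that is iterative in the dictionary sense, because each step is applied to the result of the previous one (a tiny illustrative sketch, computing sqrt(2) by Newton's method):

# Each pass refines the result of the previous pass - "applied to the result
# of the previous stage", i.e. iterative in the dictionary sense.
x = 1.0
for _ in range(10):
    x = (x + 2.0 / x) / 2.0
print(x)  # ~1.4142135623730951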
For UML specifically though the precise meaning is included in qwerty_so's answer so I will not repeat it here.

Best practice for a system design that checks calculation results in JSON

I have a program that reads a JSON file, calculates, and outputs a JSON file on S3.
My question is: how should I systematically check that the calculated output looks okay?
I understand that writing unit tests is something I should do, but that doesn't guarantee that the output file is safe. I'm thinking of making another program, running on Lambda, that checks the output JSON.
For example, let's say the program is calculating dynamic pricing in an area that has an upper-bound value. Then I want to make sure all the calculation results in the JSON file don't exceed the upper-bound value, or at least I'd like to monitor whether they are all safe or there are some anomalies.
I want to build an efficient and robust anomaly detection system, and I don't want to build the anomaly check into the same program, to avoid a single point of failure. Any suggestions are welcome.
One option is to create a second Lambda function with an S3 trigger that fires when the JSON file is written to S3 by the original function.
In this second Lambda you can verify the data and, if there is an anomaly, trigger an SNS or EventBridge event, which can be used to log/inform/alert about the issue, or perhaps to trigger a separate process that auto-corrects anomalies.
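As an illustration, such a second Lambda might look roughly like this (untested sketch; the "price" field, the UPPER_BOUND value, the environment variables and the SNS topic are all placeholders):

import json
import os
import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

UPPER_BOUND = float(os.environ.get("UPPER_BOUND", "100.0"))  # placeholder bound
TOPIC_ARN = os.environ["ALERT_TOPIC_ARN"]                    # placeholder SNS topic

def handler(event, context):
    # Triggered by S3 when the original function writes its output JSON.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        results = json.loads(body)
        # Assumed output shape: a list of records, each with a "price" field.
        anomalies = [r for r in results if r.get("price", 0) > UPPER_BOUND]
        if anomalies:
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject=f"Pricing anomalies detected in {key}",
                Message=json.dumps({"bucket": bucket, "key": key, "anomalies": anomalies}),
            )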
You should use Design by Contract, aka contract-oriented programming - in other words, preconditions and postconditions.
If the output shall never exceed a certain value, then that is a postcondition of the code producing this value. The program should assert its postconditions.
If some other code relies on a value being bounded, then that is a precondition of that code. The code should assert this precondition. This is a type of Defensive Programming technique.
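In code that might look like the following sketch (the function names and the bound are purely illustrative):

UPPER_BOUND = 100.0  # illustrative bound

def compute_dynamic_price(base_price, demand_factor):
    price = base_price * demand_factor  # stand-in for the real calculation
    # Postcondition: the producing code asserts what it promises about its output.
    assert price <= UPPER_BOUND, f"price {price} exceeds upper bound {UPPER_BOUND}"
    return price

def publish_price(price):
    # Precondition: the consuming code asserts what it relies on about its input.
    assert price <= UPPER_BOUND, f"refusing to publish out-of-bounds price {price}"
    print(f"publishing {price}")

publish_price(compute_dynamic_price(20.0, 3.0))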

Do invalid samples count towards a DataReader's history depth QoS?

The KEEP_LAST_HISTORY QoS setting for a DataReader limits the amount of recently received samples kept by the DataReader on a per-instance basis. As documented, for example, by RTI:
For a DataReader: Connext DDS attempts to keep the most recent depth DDS samples received for each instance (identified by a unique key) until the application takes them via the DataReader's take() operation.
Besides valid data samples, in DDS a DataReader will also receive invalid samples, e.g. to indicate a change in liveliness or to indicate the disposal of an instance. My questions concern how the history QoS settings affect these samples:
Are invalid samples treated the same as valid samples when it comes to a KEEP_LAST_HISTORY setting? For example, say I use the default setting of keeping only the latest sample (history depth of 1), and a DataWriter sends a valid data sample and then immediately disposes the instance. Do I risk missing either of the samples, or will the invalid sample(s) be handled specially in any way (e.g. in a separate buffer)?
In either case, can anyone point me to where the standard provides a definitive answer?
Assuming the history depth setting affects all (valid and invalid) samples, what would be a good history depth setting on a keyed (and Reliable) topic, to make sure I miss neither the last datum nor the disposal event? Is this then even possible in general without resorting to KEEP_ALL_HISTORY?
Just in case there are any (unexpected) implementation-specific differences, note that I am using RTI Connext 5.2.0 via the modern C++ API.
I could not verify it since I don't have a licence for Connext anymore. I also haven't found any explicit specification in the user or API manual. But to answer your first question: I think valid and invalid samples are treated equally when it comes to the history QoS. The reason I think so is the following code in the on_data_available callback for DataReaders.
retcode = fooDataReader_take(
        foo_data_reader,
        &data_seq,              /* the received data samples */
        &info_seq,              /* per-sample info, including the valid_data flag */
        DDS_LENGTH_UNLIMITED,   /* take as many samples as are available */
        DDS_ANY_SAMPLE_STATE,
        DDS_ANY_VIEW_STATE,
        DDS_ANY_INSTANCE_STATE);
You can explicitly specify which sample state you wish to receive (in this case, any sample state). Additionally, the sample info for each sample read with the DataReader contains the information whether a sample is valid or not. Again, I'm not 100% sure as I couldn't verify it, but I think there is no special/automatic handling of invalid samples; you handle them like the valid ones through the sample, view and instance states.
Regarding the "good value" for the history QoS: this depends on your application and on how frequently data is exchanged and accessed. You'll have to figure it out by trying.
Hope this helps at least a little bit.

How to design this particular finite state machine?

I am trying to get my head around how to design the following system, which I think can be defined as a finite state machine:
Say we have a pile of 16 building blocks (towers, walls, gates) together forming a castle. The player can drag the blocks to 16 places on a floorplan and if done right they will see the whole castle. All towers (there's four of them) are equal so they can go on any of the four corners. Same goes for some of the walls.
All in all there are 16 spots on the floorplan where you can put a building block, and each of the spots can have 17 "states": empty, or any one of the 16 building blocks. Doing some maths, this leads to 17^16 = a LOT of combinations.
The program starts with an empty floorplan and a pile of building blocks. It should then show a message like "build your own castle, start with the tower". When the user places a tower correctly, it should say "well done, now build all four towers". You get the idea.
Problem is: there are so many things a player can do. Put a block at the wrong place, remove a block, correctly put walls or towers all over the floorplan ignoring the directions given to them, etc.
It would be awesome if I could avoid having to use thousands of if-then statements to decide whether I should take the next step, show an error message, or go back to the previous step based on what the player is doing.
How would you describe the NEXT, PREVIOUS and ERROR conditions for every step of the building sequence? Are there any design methods for this? Thanks a lot for your input.
Try to do this declaratively. Define an enum (or possibly classes) describing the kinds of blocks. Define and construct a 4x4 2D array describing the sets of permissible kinds of blocks in each position (implement the sets as lists, bitfields, whatever suits you best). Whenever a player tries to place a block in a position, check whether it is permissible against the 2D array. If you want particular messages for a position being correctly filled in, put those in a similar array as well.
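A rough sketch of that idea in Python (the block kinds, spot layout and messages are of course placeholders for your own):

from enum import Enum, auto

class Block(Enum):
    TOWER = auto()
    WALL = auto()
    GATE = auto()

# 4x4 grid of permissible block kinds per spot: here the corners take towers,
# one edge spot takes the gate, and everything else takes walls (placeholder layout).
ALLOWED = [[{Block.WALL} for _ in range(4)] for _ in range(4)]
for r, c in [(0, 0), (0, 3), (3, 0), (3, 3)]:
    ALLOWED[r][c] = {Block.TOWER}
ALLOWED[3][1] = {Block.GATE}

MESSAGES = {Block.TOWER: "Well done, now build all four towers!"}  # placeholder texts

def place(board, row, col, block):
    """Try to place a block; return (accepted, message)."""
    if block in ALLOWED[row][col]:
        board[row][col] = block
        return True, MESSAGES.get(block, "Well done!")
    return False, "That block does not belong there - try another spot."

board = [[None] * 4 for _ in range(4)]
print(place(board, 0, 0, Block.TOWER))  # (True, 'Well done, now build all four towers!')
print(place(board, 1, 1, Block.GATE))   # (False, 'That block does not belong there - try another spot.')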
I don't know if an FSM is really what you are after: what kinds of sequencing constraints are you looking to verify? Does it matter whether towers are built first? From the rest of your description, it sounds like the above goal-state description would be more suitable.