I have three inputs to Merge Signals, arriving at different times. The output of Merge Signals appears to wait for all the signals and then output them together. What I want is an output on "current output" for every signal, as soon as it arrives.
For example: if I write 1 as the initial value and 5, 5, 5 in all three numerics, with a 3-second time delay, I will get 6, 11, and 16 in target 1, target 2, and target 3, and overall 16 on "current output". I don't want that to appear all at once on "current output"; I want it to appear with the same timing as in the targets.
Please see the attached photo.
Can anyone help me with that?
Thanks.
All nodes in LabVIEW fire only when all of their inputs have arrived. The language uses synchronous dataflow, not the asynchronous dataflow your description assumes.
The output of Merge Signals is a single data structure that contains all the input signals — merged, like the name says. :-)
To get the behavior you want, you need some sort of asynchronous communication. In older versions of LabVIEW, I would tell you to create a queue refnum and go look at examples of a producer/consumer pattern.
But in LabVIEW 2016 and later, right-click on each of the tunnels coming out of your flat sequence and choose "Create>>Channel Writer...". In the dialog that appears, choose the Messenger channel. Wire all the outputs of the new nodes together. This creates an asynchronous wire, which draws very differently from your regular wires. Right-click on the wire and choose "Create>>Channel Reader...". Put the reader node inside a For Loop and wire a 3 to the N terminal. Now you have the behavior that as each block finishes, it sends its data to the loop.
Move the Write nodes inside the Flat Sequence if you want to guarantee the enqueue order. If you do the Writes outside instead, you'll sometimes get out-of-order data (i.e. when the data-generation nodes happen to run quickly).
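A Messenger channel is essentially a built-in version of the classic producer/consumer queue. For readers more comfortable with a text language, here is a rough sketch of the same idea in Python (the values and delays are made up for illustration; this is an analogy, not LabVIEW code):

```python
import queue
import threading
import time

def branch(q, delay_s, value):
    # Stands in for one block of the flat sequence: do work, then send the result.
    time.sleep(delay_s)
    q.put(value)

q = queue.Queue()  # plays the role of the Messenger channel
# Three branches that finish at different times (illustrative values/delays)
threads = [threading.Thread(target=branch, args=(q, d, v))
           for d, v in [(0.01, 6), (0.02, 11), (0.03, 16)]]
for t in threads:
    t.start()

# The "For Loop with N = 3": read each result as soon as it arrives.
results = [q.get() for _ in range(3)]
for t in threads:
    t.join()
print(results)
```

Each `q.get()` returns the moment a branch finishes, which is exactly the "output appears as each target finishes" behavior the question asks for.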
Side note: I (and most LabVIEW architects) would strongly recommend avoiding sequence structures as much as possible. They're a bad habit to get into; there's plenty written online about their disadvantages.
I need some clarity about something in the Vulkan Spec. Section 8.1 says:
Render passes must include subpass dependencies (either directly or
via a subpass dependency chain) between any two subpasses that operate
on the same attachment or aliasing attachments and those subpass
dependencies must include execution and memory dependencies separating
uses of the aliases, if at least one of those subpasses writes to one
of the aliases. These dependencies must not include the
VK_DEPENDENCY_BY_REGION_BIT if the aliases are views of distinct image
subresources which overlap in memory.
Does this mean that if subpass S0 writes to attachment X (as Color, Depth, or Resolve) and a subsequent subpass S1 uses attachment X (as Color, Input, Depth, or Resolve), then there must be a subpass dependency from S0 to S1 (directly or via a chain)?
EDIT 1:
Upon further thought, the scenario is not just S0 writing and S1 reading. If S0 reads and S1 writes, that also needs S0->S1 synchronization.
EDIT 2:
I should say that what I was specifically unsure about was a color attachment written by two different subpasses. Assuming the subpasses have no logical dependency other than using the same color attachment, they could run in parallel if they used different attachments. Before reading this paragraph, I was under the impression that dependencies were only needed between two subpasses if subpass B needs actual data from subpass A, and so must wait until that data is available. The paragraph talks about general memory hazards.
If there is no logical need for two subpasses to be ordered in a certain way, the GPU could decide which one is better to run first. But if the developer always has to declare dependencies whenever two subpasses touch the same attachment, that's potential speed lost that only the GPU could recover. It shouldn't be hard for the GPU to notice that, although two subpasses have no developer-stated dependency between them, they do read/write the same attachment, so it shouldn't just mindlessly write to it from both subpasses at once. Yes, I mean the GPU would do some simple synchronization for basic cases, so as not to decapitate itself.
If there is a render pass that has two subpasses A and B, and both use the same attachment, and A writes to the shared attachment, then there is logically an ordering requirement between A and B. There has to be.
If there is no ordering requirement between two operations, then it is logically OK for those two operations to be interleaved. That is, partially running one, then partially running another, then completing the first. And they can be interleaved to any degree.
You can't interleave A and B, because the result you get is incoherent. For any shared attachment between A and B, if B writes to a pixel in that attachment, should A read that data? What if B writes twice to it? Should A read the pre-written value, the value after the first write, or the value after the second write? If A also writes to that pixel, should B's write happen before it or after? Or should A's write be between two of B's writes? And if so, which ones?
Interleaving A and B just doesn't make sense. There must be a logical order between them. So the scenario you hypothesize, where there "is no logical need for 2 subpasses to be ordered in a certain way" does not make sense.
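A toy model makes the incoherence concrete. Python stands in for the GPU here, and a two-element list stands in for the shared attachment; none of this is Vulkan API code:

```python
def run_a(att):
    # Subpass A: writes the attachment.
    att[:] = [10, 10]

def run_b(att):
    # Subpass B: reads the attachment and writes a derived value.
    att[:] = [v + 1 for v in att]

att = [0, 0]; run_a(att); run_b(att); a_then_b = att[:]   # one valid order
att = [0, 0]; run_b(att); run_a(att); b_then_a = att[:]   # the other valid order

# Now "interleave": half of A's write, all of B, then the rest of A.
att = [0, 0]
att[0] = 10        # first half of A's write
run_b(att)         # B runs in the middle
att[1] = 10        # second half of A's write
interleaved = att[:]

print(a_then_b, b_then_a, interleaved)
# The interleaved result matches neither valid ordering.
```

Both complete orderings are self-consistent results you might want; the interleaved one is a third state that corresponds to no ordering at all, which is exactly why the spec demands that you declare one.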
Either you want any reads/writes done by B to happen before the writes done by A, or you want them to happen after. Neither choice is better or more correct than the other; they are both equally valid usage patterns.
Vulkan is an explicit, low-level rendering API. It is not Vulkan's job to figure out what you're trying to do. It's your job to tell Vulkan what you want it to do. And since either possibility is perfectly valid, you must tell Vulkan what you want done.
Both A & B need 5 color attachments each, but other than the memory, they don't care about each other. Why can't the GPU share the 5 color attachments intelligently between the subpasses, interleaving as it sees fit?
How would that possibly work?
If the first half of A writes data to attachments that the second half of A will read, B can't get in the middle of that and overwrite the data. The data would be overwritten with incorrect values, and the second half of A would lose access to what the first half wrote.
If A and B both start by clearing the attachments (FYI: calling vkCmdClearAttachments at all should be considered at least potentially dubious), then no interleaving is possible, since the first thing each will do is overwrite the attachments in their entirety. The rendering commands within those subpasses expect the attachments to contain known data; having someone come along and mess with it breaks those assumptions and yields incorrect results.
Therefore, whatever these subpasses are doing, each must execute in its entirety before the other begins. You may not care which order they execute in, but they must execute entirely in some order, with no overlap between them.
Vulkan just makes you spell out what that order is.
I am a new LabVIEW user. I am trying to implement a real-time controller in LabVIEW. For that purpose, I started with an analog input/output exercise. As part of the learning process, I tried applying an input to a system, acquiring the data, and feeding it back through an analog channel. However, I noticed a significant delay between input and output, about 1 ms. Then I tried the simplest possible exercise: I generated an input signal, read it through LabVIEW, and fed it straight back out, so basically a task for the ADC and DAC only. It still has the same amount of delay. I was under the impression that hardware-timed reads and writes would reduce the delay, but there was no change.
Could you help me out with this issue? Any kind of advice would be really helpful.
What you're seeing might be the result of the while loop method you're using.
The while loop picks up data from the AI once per iteration and then sends it to the AO. This ends up being done in batches (maybe with a batch size of 1, but still), and you could be seeing a delay caused by that batching.
If you need more control over timing, LabVIEW does provide that. Read up on the different clocks, sample clocks and trigger clocks for example. They let you preload output signals into an NI-DAQ buffer and then output the signals point by point at specific, coordinated moments.
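The batching arithmetic is easy to sanity-check. Assuming (hypothetically) a 1 kHz task and one sample read per loop iteration, a software-timed read/write loop can never beat one sample period of latency:

```python
sample_rate_hz = 1_000   # hypothetical AI/AO sample rate
samples_per_read = 1     # the while loop reads one sample per iteration

# Each iteration must wait for a full "batch" to be acquired before it
# can write it back out, so this is a hard floor on the loop latency:
min_latency_s = samples_per_read / sample_rate_hz
print(min_latency_s * 1e3, "ms")   # 1.0 ms

# Larger batches raise the floor proportionally:
print(100 / sample_rate_hz * 1e3, "ms")  # 100.0 ms for 100-sample reads
```

Under these assumed numbers the floor is exactly the 1 ms the question reports, which is why hardware-timed single-point I/O (rather than buffered reads) is the usual route to lower latency.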
I'm using a HackRF One device and its corresponding osmocom Sink block inside of gnuradio-companion. Because the input to this block is Complex (i.e. a pair of Floats), I could conceivably send it an enormously large value. At some point the osmocom Sink will hit a maximum value and stop driving the attached HackRF to output stronger signals.
I'm trying to figure out what that maximum value is.
I've looked through the documentation at a number of different sites, for both the HackRF One and the osmocom source, and can't find an answer. I tried looking through the source code itself, but couldn't see any clear indication, although I may have missed something.
http://sdr.osmocom.org/trac/wiki/GrOsmoSDR
https://github.com/osmocom/gr-osmosdr
I also thought of deriving the value empirically, but didn't trust my equipment to get a precise measure of when the block started to hit the rails.
Any ideas?
Thanks
Friedman
I'm using a HackRF One device and its corresponding osmocom Sink block inside of gnuradio-companion. Because the input to this block is Complex (i.e. a pair of Floats), I could conceivably send it an enormously large value.
No, the complex samples z must satisfy

|Re(z)| ≤ 1 and |Im(z)| ≤ 1

because the osmocom sink (and the underlying drivers and devices) maps that −1 to +1 range to the range of the I and Q DAC values.
You're right, though, that it's hard to measure empirically, because typically the output amplifiers go into nonlinearity close to the maximum DAC outputs, and on top of that everything is frequency-dependent, so e.g. 0.5+j0.5 at 400 MHz doesn't necessarily produce the same electrical field strength as 0.5+j0.5 at 1 GHz.
That's true for all non-calibrated SDR devices (which, aside from your typical multi-10k-Dollar Signal Generator, is everything, unless you calibrate for all frequencies of interest yourself).
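If you generate your samples in Python before they reach the sink, it's cheap to enforce that range yourself. A minimal sketch using NumPy (the function name and limit are mine; hard clipping like this distorts any signal that actually exceeds the range, it merely guarantees you never exceed the DAC's full scale):

```python
import numpy as np

def clamp_iq(samples, limit=1.0):
    """Clip the real and imaginary parts independently to [-limit, +limit]."""
    return np.clip(samples.real, -limit, limit) \
         + 1j * np.clip(samples.imag, -limit, limit)

iq = np.array([0.5 + 0.5j, 2.0 - 3.0j, -1.5 + 0.25j])
print(clamp_iq(iq))   # the second and third samples get pulled back into range
```

In practice you'd scale your whole signal so its peak stays comfortably below 1.0 (to avoid the amplifier nonlinearity mentioned above) rather than rely on clipping.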
My question is about a LabVIEW VI (2013) that I am trying to modify. (I am only just learning to use this language. I have searched the NI site and Stack Overflow for help without success; I suspect I am using the wrong keywords.)
My VI consists of a flat sequence, one pane of which contains a while loop where integer data is collected from a device and displayed on a graph.
I would like to be able to buffer this data and then send it to disk once a preset number of samples has been collected. My attempts so far result in only the last record being saved.
Specifically, I need to know how to save the data in a buffer (array) and then, when the correct number of samples has been captured, save it all to disk (saving as it is captured slows the process down too much).
Hope the question is clear, and thanks very much in advance for any suggestions.
Tom
Below is a simple circular buffer that holds the most recent 100 readings. Each time the buffer refills, its contents are written to a text file. Drag the image onto a VI's block diagram to try it out.
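Since the snippet above is a LabVIEW block diagram (an image), here is the same logic sketched in Python for reference; the buffer size, file name, and reading values are placeholders:

```python
BUFFER_SIZE = 100          # hold the most recent 100 readings
buffer = [0.0] * BUFFER_SIZE
index = 0

def store_reading(value, path="readings.txt"):
    """Insert one reading; flush the whole buffer to disk each time it refills."""
    global index
    buffer[index] = value
    index = (index + 1) % BUFFER_SIZE
    if index == 0:  # wrapped around: the buffer has just been refilled
        with open(path, "a") as f:
            f.write("\n".join(str(v) for v in buffer) + "\n")

# 250 placeholder "readings": the buffer refills twice, so 200 values
# reach the file and the newest 50 wait for the next flush.
for i in range(250):
    store_reading(float(i))
```

The key point for the original question is the same in both languages: the acquisition loop only ever touches the in-memory buffer, and disk I/O happens once per refill instead of once per sample.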
As you learn more about LabVIEW and as your performance and multi-threaded needs increase, consider reading about some of the LabVIEW design patterns mentioned in the other answers:
State machine: http://www.ni.com/tutorial/7595/en/
Producer-consumer: http://www.ni.com/white-paper/3023/en/
I'd suggest splitting the data acquisition and the data saving into two different loops using a producer/consumer design pattern.
Moreover, if you need very high throughput, consider using the TDMS file format.
Have a look here for an overview: http://www.ni.com/white-paper/3727/en/
A screenshot would definitely help. However, some things are clear:
Unless you are dealing with a very high volume of data, very slow hard drives, or other unusual requirements, open the file before your while loop, write to it every time you acquire a sample (leaving buffering to the OS), and close it afterwards.
If you decide you need to manage buffering on your own, you can use queues. See this example for reference: https://decibel.ni.com/content/docs/DOC-14804 (they stream data from disk, buffering it in the queue, but the idea is the same).
My VI consists of a flat sequence one pane of which
Replace the flat sequence with a finite state machine (e.g. http://forums.ni.com/t5/LabVIEW/Ending-a-Flat-Sequence-Inside-a-case-structure/td-p/3170025)
I have a flow graph with a file source (with repeat off) and a GUI Time Sink. The graph is throttled by a Throttle block at 2 samples/sec. I expect to see two new samples in my GUI Time Sink every second. However, instead of 1-second updates, the GUI Time Sink doesn't display anything at all. If I turn repeat on in the file source, the GUI Time Sink does update. Why doesn't it update when repeat is off?
My question is similar to this one. In my case, I also have a file source throttled down to a very slow sample rate. However, my sink is a GUI Time Sink, not a file sink--I see no option for an "Unbuffered" parameter on the Time Sink.
My flow graph
Repeat off
Repeat On
This is actually multiple problems in one:
You're assuming the time sink will show two new values as they come in. That's not true: it only updates the display once it has (at least) as many new items as its "number of points" setting.
You're assuming GNU Radio will happily read single items (or two) at a time. Typically, that is not the case: it will ask the file source for as many items as there is space in the output buffer, something like 8192 (the exact number isn't fixed).
Throttle doesn't work the way you think. It takes the number of input samples it receives in each call to its work function (e.g. 8192), divides that number by the throttle rate you set, and then simply blocks for that many seconds. Throttle regulates the average rate over a longer time scale; at your extremely low rate, a very long time scale.
You can limit the number of items in an output buffer, but not below a page size (4 kB); for 8-byte complex samples that is 512 items at minimum.
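Putting those numbers together shows why nothing ever appears. With the 2 samples/s throttle from the question and a hypothetical 8192-item scheduler read:

```python
chunk_items = 8192        # a typical scheduler read size (not fixed)
throttle_rate = 2.0       # samples per second, as set in the flow graph
time_sink_points = 1024   # example "number of points" setting on the sink

# Throttle blocks long enough that the chunk averages out to 2 samples/s:
seconds_per_chunk = chunk_items / throttle_rate
print(seconds_per_chunk, "s per chunk (over an hour)")   # 4096.0 s

# And the Time Sink won't redraw before it has a full screen of points:
print(time_sink_points / throttle_rate, "s before the first update")  # 512.0 s
```

So with repeat off, a short file is consumed in one or two huge work calls, Throttle sleeps for thousands of seconds, and the sink never accumulates a screenful of fresh points to draw.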
I think the classical graphical GNU Radio sinks might just not be the right tool for analyzing files sample by sample.
I recommend trying the example flow graphs that come with Tim O'Shea's gr-pyqt. They are very handy for this kind of analysis.