LabVIEW: How to copy from one array to another array?

I would like to deep copy one array into another array. What is the best way to do it?
I have attempted it this way and it seems to work, but I want to be sure it really is a deep copy.
Thanks

Perhaps you're used to a different language where everything is done by reference, but you don't need to do any of this in LabVIEW. LabVIEW automatically copies the data on a wire when a copy is necessary, and avoids the copy when it isn't.
The only thing your code is doing is creating an array with an extra dimension. Inside your loop you're building each scalar value into a 1D array with one element, then passing that array to an indexing tunnel, which builds up an array of the data wired to it - since you're passing a 1D array in, you get a 2D array out. However, you could have got exactly the same result, if that's what you really wanted, by wiring your original array to a Build Array function and then reshaping it from 1 x n to n x 1 with Reshape Array:
If you're worried about memory allocations (which you shouldn't need to be unless your code is actually running out of memory or running too slowly), you can see where LabVIEW will and won't make a copy by choosing Tools > Profile > Show Buffer Allocations. This places a small black dot on every terminal, of the data types you select, where a new memory buffer has to be allocated. If you do this for the code above, you'll see that building an array from lower-dimensional data needs a new buffer, but reshaping an array doesn't.
If you have a very special case where you need to force LabVIEW not to allocate a buffer you can use an In Place Element Structure. But for the vast majority of programming you don't need to think about any of this: just let LabVIEW take care of it for you.
In the meantime I suggest you read the tutorial on loops.

Related

LabVIEW: save controls under a certain size

I would like to log the values of all controls and indicators on a VI. I can do this by using the invoke node
ctrl val.get all
followed by saving the array of name/variant data clusters to disk using datalog vis.
However, I would now like to impose a size limit: I only want to save the data if the size is under a threshold (e.g. 100 kb) to avoid generating huge files (for instance if the front panel contains an image). I want this function to be generic, so I can't create a list of control names to exclude or sort by control data type.
It seems that one way would be to flatten the variant data to a string and then measure the size of the string, but this seems potentially problematic if the control contains an inordinately large amount of data (e.g. could end up creating a 1 GB string).
Is there a more refined way to handle this problem?
You probably want to inspect each control's type so that you have a more efficient way to check the size of that type. Your problem of flattening large strings can then be avoided for any known control types you detect. Arrays, images, waveforms, etc. can all be inspected for their size once you know the type, without ever having to flatten the data. That would let you save the small stuff, ignore the known large stuff, and still flatten any unknown or unhandled types to a string to determine their size, so the function stays generic and can be used with any VI. The OpenG variant tools (among others) have lots of type-inspection pieces to use on the controls, so it shouldn't be too hard to implement.
Good news (for you): LabVIEW is horrendously inefficient at rendering data onto the front panel, with respect to memory.
Displaying data in a control takes something like 10x the memory it would take to flatten that same data to a string. There's not much in the way of clever lazy-loading for arrays or any of that. You can still do size filtering on the flattened string if you want to, but it's a safe bet that if the front panel is open, flattening control values (one at a time) to string will use a trivial amount of memory compared to what it takes just to put that stuff on the front panel.

Vulkan: Is there a way to draw multiple objects in different locations like in DirectX12?

In DirectX12, you render multiple objects in different locations using the equivalent of a single uniform buffer for the world transform like:
// Basic simplified pseudocode
SetRootSignature();
SetPrimitiveTopology();
SetPipelineState();
SetDepthStencilTarget();
SetViewportAndScissor();
for (auto object : objects)
{
    SetIndexBuffer();
    SetVertexBuffer();
    struct VSConstants
    {
        QEDx12::Math::Matrix4 modelToProjection;
    } vsConstants;
    vsConstants.modelToProjection = ViewProjMat * object->GetWorldProj();
    SetDynamicConstantBufferView(0, sizeof(vsConstants), &vsConstants);
    DrawIndexed();
}
However, in Vulkan, if you do something similar with a single uniform buffer, all the objects are rendered at the location given by the last world matrix:
for (auto object : objects)
{
    SetIndexBuffer();
    SetVertexBuffer();
    UploadUniformBuffer(object->GetWorldProj());
    DrawIndexed();
}
Is there a way to draw multiple objects with a single uniform buffer in Vulkan, just like in DirectX12?
I'm aware of Sascha Willems's dynamic uniform buffer example (https://github.com/SaschaWillems/Vulkan/tree/master/dynamicuniformbuffer), which packs many matrices into one big uniform buffer; while useful, it is not exactly what I am looking for.
Thanks in advance for any help.
I cannot find a function called SetDynamicConstantBufferView in the D3D 12 API. I presume this is some function of your invention, but without knowing what it does, I can only really guess.
It looks like you're uploading data to the buffer object while rendering. If that's the case, well, Vulkan can't do that. And that's a good thing. Uploading to memory that you're currently reading from requires synchronization. You have to issue a barrier between the last rendering command that was reading the data you're about to overwrite, and the next rendering command. It's just not a good idea if you like performance.
But again, I'm not sure exactly what that function is doing, so my understanding may be wrong.
In Vulkan, descriptors are generally not meant to be changed in the middle of rendering a frame. However, the makers of Vulkan realized that users sometimes want to draw using different subsets of the same VkBuffer object. This is what dynamic uniform/storage buffers are for.
You technically don't have multiple uniform buffers; you just have one. But you can use the offset(s) provided to vkCmdBindDescriptorSets to shift where in that buffer the next rendering command(s) will get their data from. So it's a light-weight way to supply different rendering commands with different data.
Basically, you rebind your descriptor sets, but with different pDynamicOffsets array values. To make this work, you need to plan ahead: your pipeline layout has to explicitly declare those descriptors as dynamic descriptors, and every time you bind the set you'll need to provide the offset into the buffer used by that descriptor.
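As a rough sketch of what that per-draw rebinding could look like (pipelineLayout, descriptorSet, dynamicAlignment, cmd and the per-object arrays are placeholder names, not from the question; the set is assumed to contain a single VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC descriptor):

// dynamicAlignment must be a multiple of
// VkPhysicalDeviceLimits::minUniformBufferOffsetAlignment.
for (uint32_t i = 0; i < objectCount; ++i)
{
    uint32_t dynamicOffset = i * dynamicAlignment;      // where object i's matrix block starts
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                            0, 1, &descriptorSet,       // the same set every time...
                            1, &dynamicOffset);         // ...only the offset changes
    vkCmdBindIndexBuffer(cmd, indexBuffers[i], 0, VK_INDEX_TYPE_UINT32);
    vkCmdBindVertexBuffers(cmd, 0, 1, &vertexBuffers[i], &vertexOffsets[i]);
    vkCmdDrawIndexed(cmd, indexCounts[i], 1, 0, 0, 0);
}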
That being said, it would probably be better to make your uniform buffer store larger arrays of matrices, using the dynamic offset to jump from one block of matrices to the next.
The point of that is that the uniform data you provide (depending on hardware) will remain in shader memory unless you do something to change the offset or shader. There is some small cost to uploading such data, so minimizing the need for such uploads is probably not a bad idea.
So you should upload all of your objects' buffer data in a single DMA operation. Then you issue a barrier and do your rendering, using dynamic offsets and the like to tell each rendering command where in the buffer its data lives.
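A minimal sketch of that upload-then-barrier ordering, recorded before the render pass begins (stagingBuffer, uniformBuffer, uploadSize and cmd are assumed names for illustration):

// Copy every object's matrix block out of a staging buffer in one go...
VkBufferCopy region{};
region.srcOffset = 0;
region.dstOffset = 0;
region.size = uploadSize;
vkCmdCopyBuffer(cmd, stagingBuffer, uniformBuffer, 1, &region);

// ...then make the transfer visible to uniform reads in the vertex shader.
VkBufferMemoryBarrier barrier{};
barrier.sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER;
barrier.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
barrier.dstAccessMask = VK_ACCESS_UNIFORM_READ_BIT;
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.buffer = uniformBuffer;
barrier.offset = 0;
barrier.size = uploadSize;
vkCmdPipelineBarrier(cmd, VK_PIPELINE_STAGE_TRANSFER_BIT,
                     VK_PIPELINE_STAGE_VERTEX_SHADER_BIT, 0,
                     0, nullptr, 1, &barrier, 0, nullptr);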
You either have to use push constants or have separate uniform buffers for each location. These can be bound either with a descriptor per location or with a dynamic offset.
In Sascha's example you can have more than just the one matrix inside the uniform buffer.
That means that inside UploadUniformBuffer you append the new matrix to the buffer and bind the new location.
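For the push-constant route, a sketch in the same pseudocode style as the question might look like this (pipelineLayout and cmd are assumptions, and the layout must declare a VkPushConstantRange covering the matrix):

for (auto object : objects)
{
    SetIndexBuffer();
    SetVertexBuffer();
    // Push the per-object matrix straight into the command buffer;
    // a 64-byte matrix fits comfortably in the guaranteed 128-byte budget.
    QEDx12::Math::Matrix4 modelToProjection = ViewProjMat * object->GetWorldProj();
    vkCmdPushConstants(cmd, pipelineLayout, VK_SHADER_STAGE_VERTEX_BIT,
                       0, sizeof(modelToProjection), &modelToProjection);
    DrawIndexed();
}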

Does the add method of LinkedList have better performance than that of ArrayList?

I am writing a program in Java and I am concerned about its speed. I have done some benchmarking tests and the speed does not seem good enough. I think it has to do with the add and get methods of ArrayList, because when I profile the JVM and take a snapshot, it tells me that the most time is spent in the add and get methods of ArrayList.
I read some years ago, when I took the OCPJP test, that if you do a lot of adds and deletes you should use LinkedList, but if you want fast iteration you should use ArrayList. In other words, use ArrayList when you will mostly call get and LinkedList when you will mostly call add, and that is what I have done.
I am not sure anymore whether this is right.
I would appreciate any advice on whether I should stick with that, or whether there is another way to improve the performance.
I think it has to do with the add and get methods of ArrayList, because when I profile the JVM and take a snapshot, it tells me that the most time is spent in the add and get methods of ArrayList
It sounds like you have used a profiler to check what the actual issues are -- that's the first place to start! Are you able to post the results of the analysis that might, perhaps, hint at the calling context? The speed of some operations differs between the two implementations, as summarized in other questions. If the calls you see are really made from another method in the List implementation, you might be chasing the wrong thing (e.g. frequently inserting near the front of an ArrayList can cause terrible performance).
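To make that last point concrete, here is a small sketch (not from the question) where the same front-insertion pattern is quadratic for ArrayList but cheap for LinkedList:

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class FrontInsertDemo {
    public static void main(String[] args) {
        int n = 200_000;

        List<Integer> arrayList = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            arrayList.add(0, i);   // shifts every existing element: O(n) per call
        }

        List<Integer> linkedList = new LinkedList<>();
        for (int i = 0; i < n; i++) {
            linkedList.add(0, i);  // just links in a new head node: O(1) per call
        }
    }
}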
In general performance will depend on the implementation, but when running benchmarks myself under real-world conditions I have found that ArrayLists generally fit my use case better if I am able to size them appropriately on creation.
LinkedList may or may not keep a pool of pre-allocated memory for new nodes, but once the pool is empty (if it exists at all) it will have to allocate more -- an expensive operation relative to CPU speed! That said, it only has to allocate enough space for one node and then tack it onto the tail; no copies of any of the data are made.
An ArrayList exposes the part of its implementation that pre-allocates more space than is actually required for the underlying array, growing it as elements are added. If you create an ArrayList with the default constructor, it starts with an internal array capacity of 10 elements. The catch is that when the list outgrows that initially-allocated size, it must allocate a contiguous block of memory large enough for the old and the new elements and then copy the elements from the old array into the new one.
In short, if you:
use ArrayList
do not specify an initial capacity that guarantees all items fit
proceed to grow the list far beyond its original capacity
you will incur a lot of overhead when copying items. If that is the problem, over the long run that cost should be amortized across the lack of future re-sizing ... unless, of course, you repeat the whole process with a new list rather than re-using the original that has now grown in size.
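As an illustration (the element count is arbitrary and just for the sketch), pre-sizing the list removes every one of those intermediate grow-and-copy steps:

import java.util.ArrayList;
import java.util.List;

public class PreSizeDemo {
    public static void main(String[] args) {
        int expected = 1_000_000;

        // Backing array starts at the default capacity and is re-allocated
        // and copied every time it fills up while we add elements.
        List<Integer> grown = new ArrayList<>();
        for (int i = 0; i < expected; i++) {
            grown.add(i);
        }

        // Backing array is allocated once, so add() never triggers a copy.
        List<Integer> preSized = new ArrayList<>(expected);
        for (int i = 0; i < expected; i++) {
            preSized.add(i);
        }
    }
}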
As for iteration, an array is a contiguous chunk of memory. Since many items may be adjacent, fetches from main memory can end up being much faster than fetching the nodes of a LinkedList, which could be scattered all over depending on how things get laid out in memory. I'd strongly suggest profiling with the different implementations, trusting the numbers, and tracking down what is actually going on.

Getting page aligned memory in numpy

Is there a way to allocate the data section (i.e. the data) of a numpy array on a page boundary?
As for why I care: if I were using PyOpenCL on an Intel device and wanted to create a buffer using CL_MEM_USE_HOST_PTR, Intel recommends that the data be 1) page aligned and 2) a size that is a multiple of a cache line.
There are various ways in C of allocating page aligned memory, see for example: aligned malloc() in GCC?
I'm not aware that NumPy has any explicit calls to align memory at this time. The only way I can think of doing this, short of Cython as suggested by @Saulio Castro, would be through judicious allocation of memory, with "padding", using the numpy allocation or PyOpenCL APIs.
You would need to create a buffer "padded" to align on multiples of 64K bytes. You would also need to "pad" the individual data structure elements you were allocating in the array so they too, in turn, were aligned to 4k byte boundaries. This would of course depend on what your elements look like, whether they were built in numpy data types, or structures created using the numpy dtype. The API for dtype has an "align" keyword but I would be wary of that, based on the discussion at this link.
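One way to do that padding from plain NumPy is to over-allocate a byte buffer and take a view starting at the next aligned address. This is only a sketch; the 4096-byte alignment and 64-byte granularity defaults are assumptions to check against your platform's actual page and cache-line sizes:

import numpy as np

def page_aligned_array(shape, dtype=np.float32, alignment=4096, granularity=64):
    """Sketch: return an array whose data pointer is aligned to `alignment` bytes.
    The underlying allocation is also rounded up to a multiple of `granularity`
    bytes so the region can satisfy a cache-line-multiple size requirement."""
    dtype = np.dtype(dtype)
    nbytes = int(np.prod(shape)) * dtype.itemsize
    padded = -(-nbytes // granularity) * granularity      # round the size up
    buf = np.empty(padded + alignment, dtype=np.uint8)    # over-allocate raw bytes
    offset = (-buf.ctypes.data) % alignment               # distance to the next boundary
    aligned = buf[offset:offset + nbytes].view(dtype).reshape(shape)
    return aligned  # aligned.base keeps buf alive as long as the view exists

arr = page_aligned_array((256, 256))
assert arr.ctypes.data % 4096 == 0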
An old-school trick to align a structure is to start with the largest elements, work your way down, and then "pad" with enough uint8s so that one (or N) structs fill out the alignment boundary.
Hope that's not too vague...

Accessing the most recent data from a shift register

I am fairly new to LabVIEW so please bear with me. I am working on a piece of code where I am reading data (in the form of an array) from a USB device, splitting this array to meet a required size, storing part of this array in a circular buffer, and passing the rest of the data into a shift register. The problem I am encountering is that the shift register saves the data from all previous iterations, whereas I simply want the data from the most recent iteration, but I am not sure how to do this in LabVIEW. Perhaps the shift register is not my answer here, but I was wondering if anyone might have some suggestions.
Please let me know if this is clear enough.
I should probably mention that I am using LabVIEW 2011.
In the picture above, I am reading data coming from my hardware. This data is read as an array and I split the array to meet a specific size. I then store part of this array in a 2D array, which serves as a circular buffer, and the other part of the array is sent to a shift register, where on the next iteration this data will be combined with the next set of data read from my hardware.
The problem I am seeing right now, is that the size of my shift register is constantly growing.
I took Adrian Keister's advice and found my problem. CharlesB was correct: the shift register only shows the data from the previous iteration. The reason the contents of my shift register were constantly growing was that I did not account for the next set of data that would be read during each iteration. Well, back to the drawing board.
I don't know if I understand your problem correctly, but you should probably try using conditional appending of the array. In LabVIEW 2012 this operation is even simpler because of conditional indexing in For Loops.
I provided an example here and hope that it helps. A similar condition can be created for your index modulo operation.
http://i.stack.imgur.com/AALLo.jpg