How does LabVIEW distinguish between array size info and the array data?

This isn't really a question about how to do something; it's more just to satisfy my curiosity.
According to this, LabVIEW stores arrays in memory as a series of int32s describing the size of each dimension, followed by the actual data. So, e.g., a 2-D array of size 3x5 would be stored as
0: 3
4: 5
8: data starts here
Now suppose you have an array of int32s. How would LabVIEW tell the difference between the actual data and the array size information? In the example above, for instance, how does LabVIEW know it's a 3x5 array and not a 1-D array of length 3, and then just ignore the remaining elements? Sorry if there is something obvious that I am missing.

If you look at the LabVIEW KB article How LabVIEW Stores Data in Memory, you'll see that every data type is stored with type information. For an array, it first stores an I32 for each dimension, followed by the flattened data.
The actual data type is described by its type descriptor, which consists of a list of the contained type descriptors. For an array the minimum is two:
The array
The data in the array
The array's type descriptor is
<nn> xx40 <k> <k dims> <k elems> <element type descriptor>
where nn is the total data-packet size
xx40 is the array datatype
k is the total number of dimensions
For the contained I32 the type descriptor is:
0004 xx03 xx
0004 is the length of the type descriptor
03 is the I32 type identifier
However, the format has changed between LabVIEW 7 and 8. The type descriptor is something you shouldn't mess with yourself; let LabVIEW handle it.

When references to data are passed around in LabVIEW internally, the data type is always passed around, too. Data is passed around as void pointers and the type is passed along with them. So any time LabVIEW sees your array, it'll also see that the type is a 2d array of int32s. (I work on the LabVIEW team at National Instruments)
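Here's a rough C sketch of that idea (the struct and all names are purely illustrative, not LabVIEW's actual internals): since the number of dimensions comes from the type information that travels with the pointer, the dimension sizes at the front of the data block are never ambiguous.

#include <stdint.h>
#include <stdio.h>

typedef struct {
    int32_t ndims;   /* comes from the type descriptor, NOT from the data block */
} TypeDescriptor;

/* Data block for a 2-D 3x5 I32 array: dimension sizes first, then elements. */
static const int32_t block[] = {
    3, 5,                                  /* dimension sizes */
    1, 2, 3, 4, 5, 6, 7, 8,                /* 15 elements, row-major */
    9, 10, 11, 12, 13, 14, 15
};

static void dump(const TypeDescriptor *td, const void *data) {
    const int32_t *p = data;
    size_t total = 1;
    for (int32_t i = 0; i < td->ndims; i++)
        total *= (size_t)p[i];             /* first ndims values are the sizes */
    const int32_t *elems = p + td->ndims;  /* actual data starts after them */
    for (size_t i = 0; i < total; i++)
        printf("%d ", elems[i]);
    printf("\n");
}

int main(void) {
    TypeDescriptor td = { 2 };  /* the type says: 2-D array of I32 */
    dump(&td, block);           /* the data pointer alone would be ambiguous */
    return 0;
}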

Stan variable declarations: difference in use between var_type var_name[length] and vector[length] var_name

I am new at Stan and I'm struggling to understand the difference in how different variable declaration styles are used. In particular, I am confused about when I should put square brackets after the variable type and when I should put them after the variable name. For example, given int<lower = 0> L; // length of my data, let's consider:
real N[L]; // my variable
versus
vector[L] N; // my variable
From what I understand, both declare a variable N as a vector of length L.
Is the only difference between the two that the first way specifies the variable type? Can they be used interchangeably? Should they belong to different parts of the Stan code (e.g., data vs. parameters or model)?
Thanks for explaining!
real name[size] and vector[size] name can be used pretty interchangeably. They are stored differently internally, so you can get better performance with one or the other. Some operations might also be restricted to one or the other (e.g. vector multiplication), and the optimal order in which to loop over them changes. E.g. with a matrix vs. a 2-D array, it is more efficient to loop over rows first vs. columns first, but those details will come up if you have a more specific example. The way to read this is:
real name[size];
means name is an array of type real, so a bunch of reals that are stored together.
vector[size] name;
means that name is a vector of size size, which is also a bunch of reals stored together. But the vector data type in Stan is based on the Eigen C++ library, and that allows for other operations.
You can also create arrays of vectors like this:
vector[N] name[K];
which is going to produce an array of K vectors of size N.
Bottom line: you can get any model running using either vector or real, but they're not necessarily equivalent in computational efficiency.
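For instance, here's a toy sketch (all names invented) of where the distinction shows up: matrix algebra requires the Eigen-backed types, while element-wise use works with either.

data {
  int<lower = 0> L;
  matrix[L, L] X;
  real y[L];            // array of reals: fine for element-wise access
}
parameters {
  vector[L] beta;       // vector: required for the matrix algebra below
  real<lower = 0> sigma;
}
model {
  // X * beta type-checks because beta is a vector; it would not
  // compile if beta were declared as real beta[L].
  y ~ normal(X * beta, sigma);
}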

External function to user displayed image

So I have an external C function that returns a pointer to an array. I'm trying to figure out how to convert the pointer to something that can be displayed on the screen using the latest version of LabVIEW (2019; assume I have all the toolkits).
The C function signature imports fine and is designed to display 16-bit images.
STATUS DemoImage(unsigned short** ptr, int64* rows, int64* columns, int64 image_idx)
with ptr eventually receiving a pointer to the memory location of the 16-bit image. rows and columns work as expected.
What's the name of the controls that convert the data type to something that can be displayed? I'd also appreciate answers that only address how to display 8-bit images, as I can convert them in my own library if worst comes to worst.
There is a vi.lib VI not on the palette that you can use: GetValueByPointer.
Detailed walkthrough
For the step-by-step explanation, see this NI document.
2D arrays are represented as an array of arrays. Since an array is really a pointer, a 2D array is a pointer to an array of pointers, where each pointer points to an individual row of the array. So in order to dereference a 2D array, you must first dereference the individual pointer to each row, and then dereference the individual elements in each row. The sketch below shows an example of this logic.
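This is only an illustrative rendering in C (the function and variable names are invented); in LabVIEW each dereference step is performed with GetValueByPointer:

#include <stdint.h>

/* Copy a rows x cols image out of a pointer-to-row-pointers representation
   into one contiguous buffer. */
void flatten_2d(int **arr, int64_t rows, int64_t cols, int *dest) {
    for (int64_t r = 0; r < rows; r++) {
        int *row = arr[r];                /* first dereference: the row pointer */
        for (int64_t c = 0; c < cols; c++)
            dest[r * cols + c] = row[c];  /* second dereference: the element */
    }
}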
Download examples
For a download with examples, see this one instead, section 4.d.
Returning values by reference (pass by ref)
Function: void ReturningValuesByReference_2DArrayOfIntegers (int rows, int cols, int ***newArray);
VI: Returning Values By Reference 2D Array Of Integers Complete.vi

maxDisplays of CGGetDisplaysWithPoint

Definition:
As defined here, CGGetDisplaysWithPoint takes 4 parameters:
A CGPoint object
An int32 representing the maximum number of displays returned
A mutable array passed by reference, which will be filled with the displayIDs found.
An int32 representing the matching display count
Syntax:
CGError CGGetDisplaysWithPoint(CGPoint point, uint32_t maxDisplays, CGDirectDisplayID *displays, uint32_t *matchingDisplayCount);
This is fine, and I can get this function working; however, I am quite confused as to how I should deal with the maxDisplays parameter.
As I understand it, if I set maxDisplays to 5 then if someone has 6 displays, there is a 1/6 chance that a randomly selected pixel will find no displays?
So do we just set maxDisplays to something unrealistic, like 99, and release the array afterwards? What's the point in this argument?
The point of the argument is to prevent the function from writing past the end of your array. You have to tell it the capacity of the array. Note that the displays parameter is neither a Cocoa nor Core Foundation mutable array object. It's a C-style array. It's "mutable" in the sense that it's not "const", but it's not an object that manages its own storage. You are responsible for managing that storage and must communicate its capacity to any function that is intended to store data in it (or otherwise guarantee that such function won't overrun it).
So, your question should really be how to decide on the capacity of the array. There are two basic approaches:
1) Call the function passing NULL for the displays parameter and any arbitrary value (best to use 0) for maxDisplays. As documented, when displays is NULL, maxDisplays is ignored and the function outputs via matchingDisplayCount the number of displays whose bounds contain the given point. Then, allocate an array with (at least) that many elements to use to receive the display IDs and call the function again, passing that array for displays and its capacity for maxDisplays. (A sketch of this two-call pattern follows the list.)
2) Use an array with a capacity of 32. It's not explicitly documented, but it's implicit in the API that that's the maximum number of supported displays. A display ID can be converted to an OpenGL display mask using CGDisplayIDToOpenGLDisplayMask(). The type CGOpenGLDisplayMask, which holds OpenGL display masks, is defined as uint32_t, a 32-bit value. Therefore, there can be at most 32 active displays.
This technique is used in some Apple docs, like here, here, here, and here. That last one even makes a direct connection between the number of bits in CGOpenGLDisplayMask and the maximum number of displays.
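Here is a minimal sketch of the two-call pattern from option 1 (error handling trimmed; assumes a macOS build linked against the ApplicationServices/CoreGraphics framework):

#include <ApplicationServices/ApplicationServices.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    CGPoint point = CGPointMake(100.0, 100.0);
    uint32_t count = 0;

    /* First call: displays is NULL, so maxDisplays is ignored and the
       function just reports how many displays contain the point. */
    if (CGGetDisplaysWithPoint(point, 0, NULL, &count) != kCGErrorSuccess)
        return 1;

    CGDirectDisplayID *displays = malloc(count * sizeof *displays);
    if (displays == NULL)
        return 1;

    /* Second call: pass the buffer and its capacity. */
    if (CGGetDisplaysWithPoint(point, count, displays, &count) == kCGErrorSuccess) {
        for (uint32_t i = 0; i < count; i++)
            printf("display %u: %u\n", i, (unsigned)displays[i]);
    }

    free(displays);
    return 0;
}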

What does PACK8/16/32 mean in VkFormat names?

I'm trying to understand the names of the items in the VkFormat enum, and so far I think I understand the structure of the names of all of the (non-block) formats, but I can't figure out what it means when they have a suffix of PACK8, PACK16, or PACK32. If I add up the channel sizes, they always total 8, 16, or 32, nothing irregular, so I don't understand what it would mean to bit-pack these values, since they seem to be 100% efficient, using all their bits.
As usual, the documentation is not very helpful, just saying the format is packed without saying what that means.
The PACK fields mean exactly what the specification says they mean:
whole texels or attributes are stored in a single data element, rather than individual components occupying a single data element
Though if you find that too confusing, you could just look at the actual format descriptions. Vulkan goes into excruciating detail about them, to the point of needless repetition.
The difference between VK_FORMAT_B8G8R8A8_SRGB and a (hypothetical) VK_FORMAT_B8G8R8A8_SRGB_PACK32 is the same difference as between a uint8_t[4] and a uint32_t. One is an array ("individual components"), while the other is a single value ("single data element") made up of smaller values.
If you have a uint8_t color[4] array which stores B8G8R8A8, then color[0] stores the blue component. The order of the components in the array is defined by the order of the components in the format's name.
If you have a uint32_t color value which stores B8G8R8A8, then (color & 0xFF000000) >> 24 will retrieve the blue component. The first component in the format's name occupies the highest bits, followed by the next component in the next-highest bits, and so forth.
The reason the packed-vs-not-packed distinction matters is endianness. Arrays of bytes don't have endian issues, but values packed into 16 or 32 bits do. The endianness of the packed formats is always assumed to be the native endianness of the host.
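A tiny C illustration of the difference (the values are made up):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Array-style B8G8R8A8: four separate bytes; the memory order follows
       the format name and is independent of host endianness. */
    uint8_t color_array[4] = { 0x11, 0x22, 0x33, 0x44 };   /* B, G, R, A */
    uint8_t blue_from_array = color_array[0];

    /* Packed B8G8R8A8 in a single uint32_t: the components are bit ranges,
       so their byte order in memory depends on host endianness. */
    uint32_t color_packed = 0x11223344u;  /* B=31:24, G=23:16, R=15:8, A=7:0 */
    uint8_t blue_from_packed = (color_packed & 0xFF000000u) >> 24;

    printf("blue from array: %02X, blue from packed: %02X\n",
           blue_from_array, blue_from_packed);
    return 0;
}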

Converting meshes to metaballs

I'm doing a project where I need to convert an existing polygonal mesh into a static shape made from metaballs (blobs). I have voxelized the mesh with binvox to "a .raw file" (according to the description at binvox), but I have no clue how it stores the data, and therefore don't know how to load it.
Question 1: Is there any non-PhD way to do this, i.e. create a metaball model from a polygonal mesh?
Question 2: Has anyone ever used said .raw file format from binvox, and if so, how?
The binary voxel data is run-length encoded (RLE):
The binary data consists of pairs of bytes. The first byte of each pair is the value byte and is either 0 or 1 (1 signifies the presence of a voxel). The second byte is the count byte and specifies how many times the preceding voxel value should be repeated (so obviously the minimum count is 1, and the maximum is 255).
http://www.cs.princeton.edu/~min/binvox/binvox.html
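Based only on that description, a decoder could look something like this sketch (the function name is invented; header parsing is omitted, with in positioned at the start of the binary data and size being the total voxel count from the header):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Decode binvox's RLE stream: (value, count) byte pairs, value 0 or 1. */
uint8_t *decode_binvox_rle(FILE *in, size_t size) {
    uint8_t *voxels = malloc(size);
    if (voxels == NULL)
        return NULL;

    size_t filled = 0;
    int value, count;
    while (filled < size &&
           (value = fgetc(in)) != EOF &&
           (count = fgetc(in)) != EOF) {
        if (filled + (size_t)count > size)
            break;                            /* malformed stream */
        for (int i = 0; i < count; i++)
            voxels[filled++] = (uint8_t)value; /* 1 = voxel present */
    }

    if (filled != size) {                     /* stream ended early or was bad */
        free(voxels);
        return NULL;
    }
    return voxels;
}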