Add names to cluster elements - LabVIEW

I am interfacing a LabVIEW VI with an Arduino Leonardo that reads a bunch of sensors and ADCs, collects the data, and then sends the result over the serial port in a single comma delimited sentence. My LabVIEW sub-VI takes the sentence and uses the Spreadsheet String To Array function to split it up into a vector of doubles. Since there are currently 20 readings per sentence, I would like to turn the array into a cluster with Array to Cluster and pass the cluster out of the sub-VI.
The problem with this approach is that the elements of the cluster are named [0], [1], etc., which is not helpful. Is there a way, short of unbundling and then rebundling, or indexing each array element and then bundling, to add a name to each element?
I'm using LabVIEW 2009.

You can create your cluster as a constant (preferably a typedef) and typecast the unnamed cluster into the named cluster.
Example:
EDIT
If the number of cluster elements and their data types match, you don't even need the Type Cast.


Contact pressure representation in Abaqus

The main question concerns extracting the contact pressure from an .odb file.
The issue is described by the three observations below.
Imagine that we have a simple 3D contact model in Abaqus/CAE.
1. If we plot CPRESS on the deformed shape in the Visualization module, we get one value of CPRESS for each node. We get the same (one value per node) if we request XY data field output for all frames. This all seems fine, because as far as I know Abaqus/CAE averages surface output (CPRESS) so that it can be requested as nodal output.
2. If we use the "Probe Values" tool to examine the CPRESS value at a node, we get four values for one node. This still seems fine because, I suppose, it shows the values before averaging.
3. If we request the CPRESS values from the command window using this script:
odb.steps['step_name'].frames[frame_number].fieldOutputs['CPRESS'].getSubset(region='node_path').values
the length of this vector of CPRESS values at a single node may be anywhere from 1 to 6 depending on the chosen node. And the number of CPRESS values obtained with this method has no apparent connection with the number obtained with method 2.
So the trouble is that I can't understand how the vector of CPRESS values at a node is formed.
I found very little information about this topic in the Abaqus manual.
I hope somebody can help.
Probe Values extracts the CPRESS values for the whole element. It shows the face number and its node IDs together with their corresponding values.
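If it helps to see where the numbers in method 3 come from, a short Abaqus-Python sketch like the one below groups the raw field values by node label, so you can see how many raw contributions each node receives before any averaging. The odb name, step name and frame index are placeholders; the field key 'CPRESS' follows the script above.

from odbAccess import openOdb

# Sketch: group the raw CPRESS values by node label to count the raw
# contributions per node before any averaging.
# 'job.odb', 'Step-1' and the frame index are placeholders for your model.
odb = openOdb('job.odb')
cpress = odb.steps['Step-1'].frames[-1].fieldOutputs['CPRESS']

per_node = {}
for v in cpress.values:            # one FieldValue object per raw contribution
    per_node.setdefault(v.nodeLabel, []).append(v.data)

for label in sorted(per_node.keys()):
    print('node %d: %d value(s) %s' % (label, len(per_node[label]), per_node[label]))

odb.close()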

How to know the configured size of the Chronicle Map?

We use Chronicle Map as persisted storage. As new data arrives all the time, we keep putting new data into the map. Thus we cannot predict the correct value for net.openhft.chronicle.map.ChronicleMapBuilder#entries(long). Chronicle 3 will not break when we put in more data than expected, but performance will degrade. So we would like to recreate this map with a new configuration from time to time.
Now to the real question: given a Chronicle Map file, how can we know which configuration was used for that file? Then we can compare it with the actual amount of data (the source of this knowledge is irrelevant here) and recreate the map if needed.
entries() is a high-level config that is not stored internally. What is stored internally is the number of segments, the expected number of entries per segment, and the number of "chunks" allocated in the segment's entry space. They are configured via ChronicleMapBuilder.actualSegments(), entriesPerSegment() and actualChunksPerSegmentTier(), respectively. However, there is currently no way to query the last two numbers from the created ChronicleMap, so it doesn't help much. (You can query the number of segments via ChronicleMap.segments().)
You can contribute to Chronicle-Map by adding getters to ChronicleMap to expose those configurations. Otherwise, you need to store the number of entries separately, e.g. in a file alongside the ChronicleMap persisted file.

Best practice to store List<T> in StackExchange.Redis

I am trying to find the best-practice (efficient) way of storing a set of List<T> objects against a ReportingDate key.
The list could be serialised as XML/DataContract or ProtoBuf.
And given that some of the data could be big (for that slice of key):
Is there any way of getting data from the Redis cache in an IEnumerable/streamed fashion? At the moment we use protobuf-net for a file-based cache, and we retrieve data into memory in a streamed fashion (we also have the option of selecting which props/fields we want in that T object, as ProtoBuf allows us to do that).
Is there any way to force (after some inactivity) certain parts of the data to be offloaded from memory back into a file if they are not being used, but load them up again when requested?
Thanks
It sounds like you want a sorted set - see https://redis.io/topics/data-types#sorted-sets. You would use the date as the score, perhaps in epoch time (since the score needs to be a number). SE.Redis supports all the operations you would expect for getting ranges of values (either positional ranges - the first 20 records, etc. - or absolute ranges based on the score - all items between two dates expressed in the same unit). Look at the methods starting "SortedSet...".
The value can be binary, so protobuf-net is fine (you would serialize the value for each date separately). Just pass a byte[] as the value. You need to handle serialization separately from the redis library.
As for swapping data out: no. Redis has date-based expiration, but doesn't have hot and cold storage. It is either there, or it isn't. You could use scheduled tasks to purge or move data based on date ranges, again using any of the Z* (redis) or SortedSet* (SE.Redis) methods.
For the complete list of Z* operations, see: https://redis.io/commands#sorted_set. They should all be available in SE.Redis.
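For illustration only, here is a minimal sketch of the same idea using the Python redis-py client; the key name and payload are made up. The equivalent SE.Redis calls are SortedSetAdd, SortedSetRangeByScore and SortedSetRemoveRangeByScore.

import time
import redis

r = redis.Redis()

# member = the serialized (e.g. protobuf) blob, score = reporting date as epoch seconds
r.zadd('reports', {b'serialized-report-bytes': time.time()})

# absolute range on the score: everything from the last 24 hours
recent = r.zrangebyscore('reports', time.time() - 86400, time.time())

# scheduled purge of anything older than 30 days (ZREMRANGEBYSCORE)
r.zremrangebyscore('reports', 0, time.time() - 30 * 86400)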

How to map column-wise data in a flowfile in NiFi?

I have a CSV file with the following structure:
Alfreds,Centro,Ernst,Island,Bacchus
Germany,Mexico,Austria,UK,Canada
01,02,03,04,05
Now I have to move that data into a database like below:
Name,City,ID
Alfreds,Germany,01
Centro,Mexico,02
Ernst,Austria,03
Island,UK,04
Bacchus,Canada,05
I tried to map those columns, but I wasn't able to extract the data column-wise.
Here my input data is column-wise, but I need to insert it row-wise into SQL Server.
Can anyone suggest a way to transform column-wise data into row-wise data for SQL Server?
Thanks
There is no existing Apache NiFi processor to perform column transposition. One of the problems is that this is difficult to do in a streaming manner, as most NiFi components are designed, because in a naïve implementation you need to hold the entire contents of the flowfile in active memory at the same time.
I would recommend using an ExecuteScript processor to do this (here's a 6 line Python example). Be careful doing this because you can easily end up overflowing your heap if it is not sized properly or if you read unexpectedly large files into memory.
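In case the linked example is not to hand, the sketch below shows the general shape of such a script for ExecuteScript with the Jython engine. It assumes the whole flowfile fits in memory and that the CSV contains no quoted, embedded commas.

from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import StreamCallback

class TransposeCsv(StreamCallback):
    def __init__(self):
        pass
    def process(self, inputStream, outputStream):
        # read the whole flowfile, split into rows, then swap rows and columns
        text = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
        rows = [line.split(',') for line in text.splitlines() if line.strip()]
        transposed = zip(*rows)
        out = '\n'.join(','.join(col) for col in transposed)
        outputStream.write(bytearray(out.encode('utf-8')))

flowFile = session.get()
if flowFile is not None:
    flowFile = session.write(flowFile, TransposeCsv())
    session.transfer(flowFile, REL_SUCCESS)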
You could write a custom processor which performs a streaming transpose operation by iterating over each of n rows and reading up to your delimiter, storing a byte counter per row, combining the n elements as a single output row, and repeating the process starting from the respective byte counter of each row. (Given m columns, this is O(m * n)).
Another solution would be splitting the CSV input into individual rows using the SplitText processor, using an ExecuteScript or custom processor to transpose a single row into a single column, and then using a custom merge operation (either extend the existing MergeContent processor or write a script to do this) which laterally concatenates the incoming columns into a reconstructed matrix. (O(n) + O(n) + O(m) => O(2n + m) but the individual transposition operations can be performed in parallel so with x threads it's O(n + n/x + m)).
Any of these approaches will require some level of custom development. If you are really hesitant to pursue that, you could try using ExecuteStreamCommand and one of the many bash solutions to do the transposition on the command-line.
@Andy,
This is also possible in NiFi without using ExecuteScript.
I extracted the 3 input rows as input.1, input.2, input.3 in ExtractText, then counted the number of columns in "input.1" using anyDelineatedValue in the Expression Language and stored that in a "TotalCount" attribute.
Initially I set "Count" = 1.
Using a loop, I get the current column using "Count", then increment "Count" and check "Count" in RouteOnAttribute with
"le(totalcount)"
Then I form the insert query with the "Count" attribute.
It worked well for me. It could be useful for someone.

Fast, efficient method of assigning large array of data to array of clusters?

I'm looking for a faster, more efficient method of assigning data gathered from a DAQ to its proper location in a large cluster containing arrays of subclusters.
My current method relies heavily on the OpenG cluster manipulation tools, but with a large data set the performance is far too slow.
The array and cluster location of each element of data from the DAQ is determined during an initialization phase and doesn't change during acquisition.
Because the data element origin and end points are the same throughout acquisition, I would think an array of memory locations could be created and the data directly assigned to its proper place. I'm just not sure how to implement such a thing.
The following code does what you want:
For each of your cluster elements (AMC, ANLG_PM and PA) you should add a case in the string case structure; for the elements AMC and PA you will need to place a second case structure.
This is really more of a comment, but I do not have the reputation to leave those yet, so here it is:
Regarding adding cases for every possible value of Array name, is there any reason why you cannot use an enum here? Since you are placing it into a cluster anyway, I would suggest making a type-defined enum of your possible array names. That way, when you want to add or remove one, you only have to do it in one place.
You will still need to right-click on your case structures that use this enum and select Add item for every value if you are adding a value, or manually delete the obsolete value if you are removing one. I suppose some maintenance is required either way...