What is the most efficient way to quickly understand how a complex LabVIEW VI works?

What is the best way to understand a complex LabVIEW VI that controls a motor?
My goal is to control the motor from a joystick.
The wiring diagram shown below allows a LabVIEW user to control the motor from the LabVIEW GUI: moving a slider up or down increases or decreases the desired velocity. As the slider's value changes, it is fed into a bunch of math controls and eventually gets converted into a command string for the motor to interpret. This command string, if I understand correctly, is a bunch of bytes that get written to the serial port.
Instead of using the LabVIEW GUI to control the motor, I would like to use the joystick.
What is the best way to approach this?
The joystick has pitch, yaw, roll, and throttle. Which one relates best to the velocity of a motor?

The answer to your title "What is the most efficient way to quickly understand how a complex LabView VI works?" is probably to do some combination of the following:
Look at the VI's inputs and outputs to try and understand what they are there for. The label and caption of controls and indicators may be helpful; also right-click them to check the description and tip.
As well as controls and indicators, look for other I/O: queues, notifiers, global variables, file reads/writes, instrument communications, and any data storage that persists between calls, such as an uninitialised shift register.
Look at the overall structure of the VI to see how it executes, e.g. is it a one-off operation, does it execute different cases depending on some input, does it loop until a certain condition occurs, does it use a state machine structure, etc. (a rough text-based analogue of the state machine pattern is sketched after this list).
Break down the VI's structure into smaller pieces that you can understand. You could print the diagram out and annotate it by hand, or add frame decorations and text comments to the diagram to record what you deduce. If the diagram is cluttered or poorly laid out, rearrange it as you go along (use Ctrl-click and drag on the diagram background to add blank space where you need it).
Set probes on key wires and watch them while the VI runs to see what happens
If possible, manually set the VI's controls to example values and run it to see what happens (this may not work if the VI depends on other parts of a program running at the same time)
Write a test wrapper VI that calls the complex VI and supplies it with example data or inputs to see what happens.
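Since LabVIEW code is graphical and cannot be quoted as text, here is a rough VB.NET analogue of the while-loop + case-structure + shift-register state machine pattern mentioned in the list above, purely to illustrate the control flow; the states and messages are invented for the example and are not taken from any particular VI.

Module StateMachineSketch
    ' Rough analogue of a LabVIEW state machine: the While loop stands in for the
    ' while loop, Select Case for the case structure, and the 'state' variable for
    ' the shift register that carries the next state into the following iteration.
    Enum MachineState
        Init
        WaitForCommand
        Execute
        Shutdown
    End Enum

    Sub Main()
        Dim state As MachineState = MachineState.Init
        Dim running As Boolean = True
        While running
            Select Case state
                Case MachineState.Init
                    Console.WriteLine("Initialising...")
                    state = MachineState.WaitForCommand
                Case MachineState.WaitForCommand
                    Console.WriteLine("Waiting for a command...")
                    state = MachineState.Execute
                Case MachineState.Execute
                    Console.WriteLine("Executing...")
                    state = MachineState.Shutdown
                Case MachineState.Shutdown
                    Console.WriteLine("Shutting down.")
                    running = False
            End Select
        End While
    End Sub
End Module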
To address your specific question about the VI diagram you've posted, I can see various controls for quantities such as Velocity, Position, Amplitude, Max A (amplitude?), Frequency and so on. You need to decide which of these quantities should be controlled by which axis or output of your joystick. Then you need to add code that reads those values from your joystick, and modify the existing code so that the parameters you want to control are supplied by the joystick values instead of the front panel controls. You could probably just put the joystick-reading code inside the existing loop, wire the joystick outputs to join up with the wires from the front panel controls you want to replace, and then use the right-click menu to change the relevant front panel controls into indicators so that they show the values you are getting from the joystick.
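As a rough illustration of that idea in text form (not LabVIEW code), the VB.NET sketch below maps a hypothetical joystick throttle reading to a velocity and writes the resulting command bytes to a serial port. The COM port, baud rate, "V<value><CR>" command format and the ReadThrottle helper are all assumptions for the example, not details taken from the VI shown above; in the real application the equivalent joystick-read-and-scale code would live inside the existing LabVIEW loop.

Imports System.IO.Ports
Imports System.Text
Imports System.Threading

Module JoystickToMotorSketch
    ' Placeholder for whatever joystick API you use; returns a throttle value from 0.0 to 1.0.
    Function ReadThrottle() As Double
        Return 0.5
    End Function

    Sub Main()
        ' Port settings and the command format below are assumptions for illustration only.
        Using port As New SerialPort("COM3", 9600, Parity.None, 8, StopBits.One)
            port.Open()
            For i As Integer = 1 To 10
                Dim velocity As Double = ReadThrottle() * 1000.0          ' scale throttle to motor units
                Dim command As String = String.Format("V{0:F0}", velocity) & vbCr
                Dim bytes As Byte() = Encoding.ASCII.GetBytes(command)
                port.Write(bytes, 0, bytes.Length)                        ' the "bunch of bytes" on the serial port
                Thread.Sleep(100)                                         ' update roughly ten times per second
            Next
        End Using
    End Sub
End Module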

The best way is to write one from scratch. But you could analyse the code by clicking the Highlight Execution button to display an animation of the block diagram execution when you run the VI, and use probes to check intermediate values. And you probably should also do an on-line course, e.g. LabVIEW Training: Learn LabVIEW in Three or Six Hours

My answer to your third question is "throttle".

Related

LabVIEW - can you use a numeric control as an indicator

I've written LabVIEW code for a locking system.
The lock has a motion timer that relies on input from a numeric control. I've added a script file reader that needs to be able to change that timer value. Using a selector, I can switch between values, but I'd like it to update the value in the control, rather than override it, so that I can see it on the screen.
How can this be accomplished?
This is currently how I switch between the scripted version and the direct numeric input from the control:
So how can I get the script value to update the control box, or is that not possible?
Do you mean something like this? I created a little VI to demonstrate how the control is updated.
In most cases "property nodes" are the way to go. Every control has a lot of different options to chose from and usually if you look through the properties you will find what you're looking for :)
A little hint:
If you want to add "code" to your question so that other users can test it, you can create a .png file. To do this, you need to select the parts of the vi that you want to share, and click on "Edit > Create VI Snippet from Selection". Then you save that generated .png and upload it here as a picture. Then others can drag&drop it into their block diagram.
Important: Check the .png before uploading and make sure that you're not accidentally posting sensitive data of your company.

Calling another VI at runtime

I have created two VIs in LabVIEW: one to acquire serial data and another to plot the acquired data on an XY graph.
The second VI gets called when a Value Change event occurs on a button in the first VI. But the problem is that when the second VI is called, the first VI suspends its operation, so the values don't get updated.
Is there any solution for this?
First VI block diagram:
First VI front panel:
Second VI (ALL DATA) block diagram:
Well, you are doing some nasty stuff with global variables. This works but is not considered good practice (have a look at queues and notifiers). Furthermore, I don't see how your data gets written to those variables...
In any case, put your second VI in a separate while loop and schedule it at about 100 ms (that is usually enough to update front panels or to interact with users). I'm not sure your button event is the right way to go, precisely because the second VI waits for the callback. Just use a simple button and a true/false case to let the second VI keep running (this would also be the solution if you don't want to move the case into a second VI). Just make sure you change the mechanical action of the button to a switch: you're not checking its value at infinite speed, and you want to make sure the click gets caught every time ;)
You will need to use the VI Server functionality. The exact method has changed over the years, but I believe the current recommended implementation is to use 'Start Asynchronous Call'.
There is an example that you can view using the Example Finder. To open the Example Finder, navigate to Help > Find Examples, select the 'Search' tab and search for 'asynchronous', then open the VI called 'Asynchronous Call and Forget.vi'.
There are other variations for asynchronous implementations, but this is probably a good place to start.
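Start Asynchronous Call is a LabVIEW feature and cannot be shown as text code, but purely as a conceptual analogue of "call and forget", here is a VB.NET sketch in which the long-running plot work is kicked off on its own task so the acquisition loop is never blocked. PlotAllData, the sample messages and the timings are all made up for the sketch; this is not how LabVIEW implements the feature.

Imports System.Threading
Imports System.Threading.Tasks

Module CallAndForgetSketch
    Sub PlotAllData()
        Thread.Sleep(2000)                                  ' stands in for the second VI's work
        Console.WriteLine("Plot updated.")
    End Sub

    Sub Main()
        For i As Integer = 1 To 5
            Console.WriteLine("Acquired sample " & i)       ' stands in for the serial acquisition loop
            If i = 2 Then
                Task.Run(AddressOf PlotAllData)             ' fire and forget: acquisition is not blocked
            End If
            Thread.Sleep(500)
        Next
        Console.WriteLine("Acquisition finished. Press Enter to exit.")
        Console.ReadLine()                                  ' keep the process alive so the background plot can finish
    End Sub
End Module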

Strategy for quick icon generation for LabVIEW?

LabVIEW programs become difficult to maintain when block diagrams get too big.
Using subVIs is recommended to avoid this.
By default, every subVI's icon looks the same, except for a number.
I find that the time needed to create meaningful icons for most subVIs exceeds the coding time by far. Even when using existing images instead of the integrated icon editor, I first have to find a suitable one, and then I usually have to scale and adapt it.
Even when I settle on using just text in the end, the time needed for icon creation still exceeds the time needed to program the VI.
I can see the following strategies to avoid wasting time with icon design:
All in one large VI
Not creating relatively simple subVIs with fewer than approx. 20 blocks (adjust the number with experience)
Just have the default icon everywhere
I do not like any of these. They do not help with maintainability.
It seems there is a trade-off between maintainability and time required for icon design.
How do people with LabVIEW experience solve this?
Using subVIs is the right way to create VIs anyway.
I would suggest adding all VIs into a single library, then changing the icon of the library to one you like and clicking Apply Icon To VIs. This will apply the library icon to all member VIs like a template.
Then you can use VI Scripting to programmatically add text to the VI icon (for example, the VI name): http://sine.ni.com/nips/cds/view/p/lang/en/nid/209110
I suggest you take a look at this: https://lavag.org/files/file/100-mark-ballas-icon-editor-v24-lv2010/
It will show you how you can write text on a VI's icon programmatically.
Install the GOOP Development Suite.
In the menu, click:
Tools -> GOOP -> Create VI Icon...
Then click 'Accept'. 95% of the time this is appropriate.
The other 5% of the time is spent setting up headers based on the library/class/folder; GDS then offers to update the headers for the other member VIs.
The LabVIEW help includes some simple instructions for creating an icon template and then using that template to create an icon for each new VI you create. I really don't see why either of those steps should take you more than about fifteen seconds!
There's certainly no need to be an artist, or take too much trouble over it, to create a VI icon: all that really matters is that each VI icon is:
identifiable as part of your application - this is why to use a template; and
distinguishable from the other VI icons in your application - you can easily do this with a couple of words of text or a glyph from the included set, even if you choose the latter at random.

How can I tell whether a value comes from the keyboard or a barcode reader in VB.NET?

I have a Windows application in VB.NET. I want to know whether a value was entered by a user via the keyboard or whether it came from a barcode reader. I want to store values that come from the keyboard in a different database than the ones that come from the barcode reader.
Option 1:
Get a barcode scanner that is connected to a serial port (a raw serial device read via a COM port). As most barcode scanners emulate keyboard strokes, there is no way to directly distinguish barcode scanner input from keyboard input (see the next option) without going low-level (see the last update).
A scanner connected to a serial port (or an emulated one via USB, as physical serial ports are not so common anymore) gives you full control over where the input comes from.
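If you go the serial route, reading the scanner in VB.NET is a standard SerialPort job. A minimal sketch, assuming a hypothetical COM port, 9600 baud and CR-terminated scans (check your scanner's manual for the real settings):

Imports System.IO.Ports

Module SerialScannerSketch
    Sub Main()
        Using scanner As New SerialPort("COM4", 9600, Parity.None, 8, StopBits.One)
            scanner.NewLine = vbCr              ' many scanners terminate a scan with CR
            scanner.ReadTimeout = 5000
            scanner.Open()
            Try
                Dim barcode As String = scanner.ReadLine()   ' blocks until a full scan arrives
                Console.WriteLine("Scanned: " & barcode)
                ' Store to the "barcode" database here - keyboard input never touches this path.
            Catch ex As TimeoutException
                Console.WriteLine("No scan received within 5 seconds.")
            End Try
        End Using
    End Sub
End Module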
Option 2:
Count the number of characters typed per unit of time. Barcode scanners inject a sequence (a whole line) very fast compared to typing. Measuring the time spent in the textbox by counting key presses (use CR+LF as a measuring point, as these are sent by the scanner as well) gives you one way to distinguish whether a human is typing (unless someone types extremely fast) or the content was injected. If the input times out, just reject/clear it.
In addition, the checksum of the barcode (if you use a symbology that contains one) can be used as extra validation on top of the time measurement.
(You can detect pasting by overriding Ctrl+V, as in the next option.)
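To illustrate the checksum idea mentioned above, here is a minimal VB.NET sketch assuming EAN-13 barcodes; other symbologies use different check-digit rules, so treat this purely as an example.

Imports System.Linq

Module Ean13Sketch
    ' Returns True if the 13th digit matches the check digit computed from the first 12.
    Function HasValidCheckDigit(code As String) As Boolean
        If code Is Nothing OrElse code.Length <> 13 OrElse Not code.All(Function(c) Char.IsDigit(c)) Then
            Return False
        End If
        Dim total As Integer = 0
        For i As Integer = 0 To 11
            Dim digit As Integer = AscW(code(i)) - AscW("0"c)
            total += If(i Mod 2 = 0, digit, digit * 3)        ' weights 1,3,1,3,... from the left
        Next
        Dim expected As Integer = (10 - (total Mod 10)) Mod 10
        Return expected = AscW(code(12)) - AscW("0"c)
    End Function

    Sub Main()
        Console.WriteLine(HasValidCheckDigit("4006381333931"))   ' True  (valid EAN-13)
        Console.WriteLine(HasValidCheckDigit("4006381333930"))   ' False (wrong check digit)
    End Sub
End Module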
Option 3:
Combine this with option 2, but instead of measuring in the textbox, tap into the ProcessCmdKey() function (by overriding it) and measure there while the textbox has focus. This way you can buffer the input first, measure the timing and, if it arrives within a set time-out, inject the line into the textbox.
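A minimal sketch of options 2/3 combined for a WinForms form, assuming a ~50 ms per-keystroke threshold and Enter as the terminator (the threshold, count and control names are placeholders to tune for your scanner); it only flags the likely source rather than buffering and re-injecting the text as described above.

Imports System.Diagnostics
Imports System.Windows.Forms

Public Class ScanForm
    Inherits Form

    Private ReadOnly scanBox As New TextBox()
    Private ReadOnly keyTimer As New Stopwatch()
    Private fastKeyCount As Integer = 0

    Public Sub New()
        scanBox.Width = 250
        Controls.Add(scanBox)
    End Sub

    Protected Overrides Function ProcessCmdKey(ByRef msg As Message, keyData As Keys) As Boolean
        If scanBox.Focused Then
            ' Keystrokes arriving less than ~50 ms apart are very unlikely to be typed by hand.
            If keyTimer.IsRunning AndAlso keyTimer.ElapsedMilliseconds < 50 Then
                fastKeyCount += 1
            End If
            keyTimer.Restart()

            If keyData = Keys.Enter Then
                ' Scanners send the whole code quickly and usually finish with CR.
                Dim fromScanner As Boolean = fastKeyCount >= 4
                MessageBox.Show(If(fromScanner, "Looks like a scanner.", "Looks like the keyboard."))
                fastKeyCount = 0
            End If
        End If
        Return MyBase.ProcessCmdKey(msg, keyData)
    End Function

    <STAThread> Public Shared Sub Main()
        Application.EnableVisualStyles()
        Application.Run(New ScanForm())
    End Sub
End Class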
Option 4:
This might be a good option as well:
http://nicholas.piasecki.name/blog/2009/02/distinguishing-barcode-scanners-from-the-keyboard-in-winforms/
Option 5: a non-technical approach -
Usability improvements: make it visually very clear that barcodes must be entered with a scanner and not typed. I am including this as an option because it is simple and, if done well, also effective (unfortunately there is no single right way to do it).
Approaches could include, for example, a watermark in the textbox ("Don't type, scan!" or something along those lines). Give it a different colour, border, size, etc. to distinguish it from normal textboxes, and have help text associated with it and available at all times to improve clarity.
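One way to do the watermark part of this in WinForms is the native cue banner. A minimal sketch using the EM_SETCUEBANNER message (the banner text and control names are just examples):

Imports System.Runtime.InteropServices
Imports System.Windows.Forms

Module CueBannerSketch
    Private Const EM_SETCUEBANNER As Integer = &H1501

    <DllImport("user32.dll", CharSet:=CharSet.Unicode)> Private Function SendMessage(hWnd As IntPtr, msg As Integer, wParam As IntPtr, lParam As String) As IntPtr
    End Function

    Sub Main()
        Application.EnableVisualStyles()                   ' cue banners need visual styles
        Dim form As New Form()
        Dim scanBox As New TextBox() With {.Width = 250}
        form.Controls.Add(scanBox)
        ' wParam = 1 keeps the banner visible even while the textbox has focus.
        AddHandler form.Shown, Sub() SendMessage(scanBox.Handle, EM_SETCUEBANNER, New IntPtr(1), "Don't type - scan!")
        Application.Run(form)
    End Sub
End Module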

LINQPad: Any way to make the Dump() results be initially collapsed?

Couldn't find it anywhere (Google or Stack Overflow).
Is there a way to force Dump()'s output to be automatically collapsed?
Update:
Some more info, to bring more focus to the question.
As mentioned below, collapsing can be done after the output is rendered via keyboard shortcuts (Alt+1, Alt+2, Alt+3).
And the rendering depth can be set by passing an int depth parameter, but that does not allow you to expand the results.
Is there some way to change the CSS formatting? I'm not that fluent in CSS, so this might be the solution.
Why I need this:
What I want is to make the output 'cleaner' and dive in when something of interest shows up.
I'm running a query repeatedly and don't need all of the output all the time, but I still want to use my human ability to detect change instead of coding the detection.
Update: November 2013
As Joe (the author himself!) mentions in the comments, LINQPad no longer has the limitation described.
It is now possible to pass 0 and then expand the information after it's rendered.
No, although you can call Dump with a number to force it to display to that nesting depth:
.Dump(0)
You can also use the formatting shortcuts (Alt+1, Alt+2, Alt+3) to collapse the whole display to one, two or three levels.
Another option is to dump to grids. Call Dump(true) or use the toolbar button. Grids show only one level and subsequent levels are shown upon demand with hyperlinks.
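For reference, this is roughly how those calls look inside a LINQPad query (written here as VB statements to match the other examples; LINQPad supplies the Dump extension and the usual imports automatically, and the nested sample data is invented):

' LINQPad "VB Statements" query
Dim orders = Enumerable.Range(1, 3).Select(Function(i) New With {.OrderId = i, .Lines = Enumerable.Range(1, 4).ToArray()})

orders.Dump(0)      ' render only the top level; expand by hand where needed
orders.Dump(True)   ' render to a grid; nested levels open on demand via hyperlinks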