I have a snippet of LabVIEW code. The issue is that I am working on a larger project and have therefore dropped LabVIEW for it. Is there a way I can decipher what this LabVIEW program is doing without having to learn all of LabVIEW? I just need to know the final voltage and current being applied. I have worked out that the program outputs a pulse sequence at 20 Hz with a 200 microsecond pulse duration. Is there a way I can simulate this kind of program in LabVIEW instead of learning what each component does?
What would help is the actual VI, or a code snippet from LabVIEW (not a screenshot of the code). Then it would be possible to run the code and save the generated signal array to a file. For now, please check the functions below; here are links to their descriptions:
http://zone.ni.com/reference/en-XX/help/371361P-01/lvanls/signal_generator_duration/
http://zone.ni.com/reference/en-XX/help/371361N-01/lvanls/ramp_pattern/
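If the goal is just to see the signal without learning LabVIEW, the pulse train described above (20 Hz, 200 µs pulses) can be reproduced with a few lines of NumPy. This is only a sketch: the amplitude below is a placeholder, because the actual voltage and current have to come from the settings inside the VI.

```python
import numpy as np

# Rebuild the pulse train described above: 20 Hz repetition rate,
# 200 microsecond pulse width. AMPLITUDE is a placeholder; the real
# voltage/current values have to be read out of the VI itself.
SAMPLE_RATE = 1_000_000      # 1 MHz, so a 200 us pulse spans 200 samples
FREQ_HZ = 20                 # pulse repetition rate
PULSE_WIDTH_S = 200e-6       # pulse duration
AMPLITUDE = 1.0              # placeholder amplitude (volts)
DURATION_S = 0.5             # total length of the simulated signal

t = np.arange(0, DURATION_S, 1.0 / SAMPLE_RATE)
period = 1.0 / FREQ_HZ
# A sample is "high" while its position within the current period is below the pulse width.
signal = AMPLITUDE * ((t % period) < PULSE_WIDTH_S).astype(float)

# Save the generated signal array to a file, as suggested above.
np.savetxt("pulse_train.csv", np.column_stack((t, signal)),
           delimiter=",", header="time_s,voltage_v", comments="")
```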
I'm a newbie using LabVIEW for my project. I'm developing a program that gathers data from sensors attached to a DAQmx board and also from an STS-VIS spectrometer from Ocean Optics. In my first attempt I combined both devices in one loop inside the same flat structure, but I got an error saying: "The application is not able to keep up with the hardware acquisition." I could not get the data to show on the graphs for both devices, but it was fine if I ran them separately. I found a suggestion that I need to put the two devices in separate while loops, because they may have different buffer sizes. I did that and it worked: all the sensors show up in their graphs. But the weird thing is, I have to stop the program after the first run and then run it a second time before the graphs show up in the application. Can anyone tell me what I did wrong and give me a solution? Due to project rules I cannot share my VI here publicly, but if anyone is interested in helping, I'd be happy to share it privately. Thank you.
You are doing the right thing, but you have to understand how data acquisition works in LabVIEW and in the hardware.
You can increase the hardware buffer programmatically using a property node, or try to read as fast as possible; then you don't need two separate loops.
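If it helps to see the idea outside of LabVIEW, here is a rough sketch of both suggestions (a larger input buffer plus frequent, large reads) using the nidaqmx Python package; the device name, channel, rate and buffer size are placeholders for your own setup.

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

# Rough Python equivalent of "increase the buffer programmatically with a
# property node" and "read as fast/as much as possible".
# "Dev1/ai0", the rate and the buffer size are placeholders.
with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(rate=10_000,
                                    sample_mode=AcquisitionType.CONTINUOUS)
    task.in_stream.input_buf_size = 100_000   # enlarge the host-side input buffer
    task.start()
    for _ in range(10):
        # Read in large chunks so the buffer is drained faster than it fills.
        data = task.read(number_of_samples_per_channel=1_000)
```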
I currently work with an NI DAQmx device too and became desperate using LabVIEW because there is no good documentation and there are few examples. Then I started to use Python, which I found more intuitive. The only disadvantage is that the user interface is not as easily generated, but for that you can use Qt Designer (an open-source program available online).
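For comparison, a minimal finite acquisition with the nidaqmx Python package looks roughly like this (the device/channel name is a placeholder, and the package still needs the NI-DAQmx driver installed underneath):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

# Minimal finite acquisition: 100 samples at 1 kHz from one analog input.
# "Dev1/ai0" is a placeholder for your own device and channel.
with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(rate=1_000,
                                    sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=100)
    samples = task.read(number_of_samples_per_channel=100)

print(samples[:5])
```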
I have an Arduino file (.ino) that I want to modify.
I thought of two ways I could do this:
1. I make a program in VB.NET that sends serial signals, and based on those signals my Arduino board is programmed to change a certain variable. This seems like a good idea, but after you reset the Arduino board the variable's value is lost.
2. I change the Arduino code immediately before I compile it and upload it to the board. This is what I do now, but it gets tricky because you have to modify a lot of files, the encodings change, and so on.
How would you solve this problem?
As others have said here, you should modify your code to persist the changes to EEPROM, either the built-in one or, alternatively, a replaceable external part such as an FRAM module, which is available cheaply (around $6.00) from many suppliers.
At an average lifetime of 100,000 writes you should be OK with the internal EEPROM, unless you are changing the slider too often. Do the math:
100 changes daily against a 100,000-write endurance is roughly 1,000 days, i.e. about two and a half to three years of use. If that is not sufficient for you, get an external FRAM module, which will last orders of magnitude longer.
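As a quick sanity check of that estimate (100,000 rated write cycles is the usual figure for the ATmega's internal EEPROM, and 100 writes per day is the assumption above):

```python
# Quick sanity check of the EEPROM endurance estimate above.
EEPROM_WRITE_ENDURANCE = 100_000   # typical rated write cycles per cell
WRITES_PER_DAY = 100               # assumed rate of slider changes

days = EEPROM_WRITE_ENDURANCE / WRITES_PER_DAY
print(f"{days:.0f} days, about {days / 365:.1f} years")   # 1000 days, about 2.7 years
```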
I am using Mindstorms and built a robot with two motors and an IR sensor.
1) I made a program that lets the robot follow an IR signal and stop when it reaches it.
2) I made a program to remote-control the robot with the IR remote.
Both programs work. But when I combine them, program 1 no longer works.
It gives erratic results from the IR sensor. It seems that detecting an IR button is not compatible with measuring the signal in the same program. Has anyone had a similar experience, or does anyone know how to deal with this?
This is the program which works:
Introducing another selection around it that senses an IR button no longer works:
The result is that the program branches into the correct section, but the IR measurements of distance and direction give random results.
Does anyone have any ideas?
If you have already tried another sensor and there is still a problem, it could be a bug in the software. I would post your example on the NI MINDSTORMS support board so that they can look into it:
http://forums.ni.com/t5/LabVIEW-for-LEGO-MINDSTORMS-and/bd-p/460
I'm using a USB-6356 DAQ board to control an IC via SPI.
I'm using parts of the NI SPI Digital Waveform library to create the digital waveform, then a small wrapper VI to transmit the code.
My IC measures temperature on an RTD, and currently the controlling VI has a 'push for single measurement' style button.
When I push it, the temperature is returned by a series of other VIs running the SPI communication.
After some number of pushes (clicking the button very quickly makes this happen more quickly in time, but not necessarily in number of clicks), the VI generates an error -200361, which is nominally FIFO buffer overflow on the DAQ board.
It's unclear to me if that could actually be the cause of the problem, but I don't think so...
An NI guide describing this error for USB-600{0,8,9} devices looks promising, but following the suggestions didn't help me. I substituted 'DI.UsbXferReqCount' for the analog equivalent, since my read task is digital. Reading the default returned 4, so I changed the property to write and selected '1', but this made no difference.
I tried uninstalling the DAQ board using the Device Manager, unplugging and replugging, but this also didn't change anything.
My guess is that additional clock samples are generated after the end of the 'Finite Samples' portion of the Read and Write tasks, and that these might be adding blank data that overflows the buffer. However, the temperatures returned don't look like strange data, and I'd have assumed that if this were the case, my VIs would be unable to interpret the data read in as the correct temperature.
I've attached an image of the block diagram for the Transmit VI I'm using, but actually getting it to run would require an entire library of VIs.
The controlling VI is attached to a nearly identical forum post at NI forums.
I think the USB-6356 doesn't have an output buffer for digital signals. You can check this in NI MAX: if you select digital output, you may find that there are no parameters for samples; it only outputs a Boolean value (0 or 1) one point at a time.
You can also use the DAQ Assistant in LabVIEW: when you configure a digital output and select N Samples or Continuous Samples, then press the OK button, a dialog appears telling you there is no buffer for the lines you selected.
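The same check can be scripted. Below is a rough sketch using the nidaqmx Python bindings that simply tries to configure a clocked (buffered) digital-output task on the chosen lines and prints the driver error if they have no buffer; the device and line names are placeholders.

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType
from nidaqmx.errors import DaqError

# Try to set up a clocked (buffered) digital output task. If the selected
# lines have no hardware buffer, DAQmx raises an error while configuring the
# sample clock or writing - the same information the DAQ Assistant dialog shows.
# "Dev1/port0/line0" is a placeholder for your own device and lines.
try:
    with nidaqmx.Task() as task:
        task.do_channels.add_do_chan("Dev1/port0/line0")
        task.timing.cfg_samp_clk_timing(rate=1_000,
                                        sample_mode=AcquisitionType.FINITE,
                                        samps_per_chan=100)
        task.write([True, False] * 50, auto_start=False)
        print("Buffered (clocked) digital output accepted on these lines.")
except DaqError as err:
    print("No buffered digital output on these lines:", err)
```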
I'm having trouble understanding the exact role of an interpreter. To quote Wikipedia: "Programs in interpreted languages[1] are not translated into machine code, although their interpreter (which may be seen as an executor or processor) typically consists of directly executable machine code (generated from assembly and/or high-level language source code)."
My doubt is about this statement: "interpreter (which may be seen as an executor or processor) typically consists of directly executable machine code". What does that mean? An interpreter is supposed to be a program. How can it 'execute' code by itself? They restate this fact by saying "an interpreter is different from language translators like compilers". Can anyone clarify, please? Also, what is the difference (if any) between an interpreted language and machine code?
Compiler:
Transforms your code into binary machine code that can be executed directly by the CPU. Examples: C, Fortran.
Interpreter:
A program that executes the code written by the programmer without an additional transformation step. Examples: Bash scripts, formulas in Excel.
Actually, it is not that simple any more. There are many concepts between these two poles: Java is compiled into an intermediate language (bytecode) that is then interpreted, and just-in-time compilers compile small parts of interpreted code on the fly to speed them up.
"How can it 'execute' code by itself?" Take the Excel example. If you type a calculation into a cell, Excel somehow executes the code, right? But Excel does not compile the code and run it, but it parses it and executes in a general way. Excel has a sum function that in the end is executed on the processor as an add machine command, but there is a lot to do for Excel in between.
I will briefly describe an emulator to explain the main concept mentioned in the question.
Suppose I am using MAME, a video game emulator, and select the old classic arcade game Ms. Pac-Man. Looking at the schematic, or looking directly at the PCB inside the arcade cabinet, it is easy to find the processor: the Zilog Z80, the only large chip with 40 pins. If we get the technical data for that processor, we can find the binary encoding for each instruction it can execute. Basically, it fetches an 8-bit value (from 0 to 255) that tells the processor what to do. The emulator reads that byte (the exact same byte the Z80 inside the original Ms. Pac-Man board would read), determines what a Z80 would do, and simulates the instruction.
Some classic video games may have used an x86 processor, similar to the one in most PCs today. Even when selecting such a game in MAME, the emulator still reads the bytes found in that game and interprets each one the way the x86 processor would. In other words, the emulator does not take advantage of the fact that the PC and the emulated game use a similar processor. It performs the same steps to emulate any game, no matter whether the PC on which MAME is running shares any similarity with the original hardware.
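A minimal sketch of that fetch-decode-execute loop in Python (a made-up toy instruction set, not the real Z80 encoding):

```python
# Toy emulator core: fetch each opcode byte, decode it, and simulate it in
# software, as described above. The opcodes here are invented for illustration.
def run(program: bytes) -> int:
    acc = 0      # accumulator register of the toy CPU
    pc = 0       # program counter
    while pc < len(program):
        opcode = program[pc]              # fetch
        if opcode == 0x01:                # decode: "load immediate into acc"
            acc = program[pc + 1]         # execute
            pc += 2
        elif opcode == 0x02:              # "add immediate to acc"
            acc = (acc + program[pc + 1]) & 0xFF
            pc += 2
        elif opcode == 0x00:              # "halt"
            break
        else:
            raise ValueError(f"unknown opcode {opcode:#04x}")
    return acc

# load 40, add 2, halt -> 42
print(run(bytes([0x01, 40, 0x02, 2, 0x00])))
```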
You are asking how an interpreter could execute code. The interpreter is a program (just software, not a physical processor), so the wording is indeed confusing. For that sentence to make sense literally, we would need all of the following conditions:
1 - the program to interpret is already in binary, in a machine language that can be executed directly by the processor used in your PC
2 - the program location, the exact address used, is the same as the location that you can reserve in your PC
3 - any library and any I/O occupy the exact same address
When all these conditions are met, the interpreter could just tell the processor in your PC to stop executing the interpreter's own code and instead "jump" into the code of the program to be interpreted. Anyone could then say: that is not an interpreter, it is just a launcher.
Such an interpreter, which does not actually interpret but lets your processor do the real work, might still be useful in the following way: it could let the processor perform most of the work, but request an exception whenever the interpreted code executes certain types of instruction. For example, let the code run, but generate a "general protection fault", "trap" or "exception" whenever it tries to execute any variant of "IN" or "OUT". The interpreter would take note of the I/O port being written, or would choose a value to return instead of allowing a read from a real I/O port. The interpreter would then make the processor "jump" back into the interpreted program at the location just after the "IN" or "OUT" instruction.
Normally, an interpreter reads a text file containing the original source code (which could be Unicode instead of ASCII), determines line by line and word by word what a compiler would do, and then simulates that task on the fly. When the original compiler would need to read many lines to fully understand the current task, the interpreter also needs to read all those lines before it can simulate the same task.
A big advantage of an interpreter is that it cannot crash the machine. Because every instruction is simulated, the system is not at the mercy of a bug or of malicious code. That was a big advantage back when computers needed to reboot after almost any bug, at a time when a reboot took 10 minutes or more.
Today, with fast SSDs that let a machine reboot in a few seconds, and with reliable operating systems that can trap an error in one process and close that process without affecting the stability of the machine, there is less incentive to prefer a slow interpreter over a much faster JIT or an even faster native binary.