I'm a newbie using LabVIEW for my project. I'm developing a program that gathers data from sensors attached to a DAQmx board and also from an Ocean Optics STS-VIS spectrometer. In my first attempt I combined both devices in one loop inside the same flat sequence structure, but I got an error saying: "The application is not able to keep up with the hardware acquisition." I could not get the data showing on the graph for both devices, but it was fine if I ran them separately. I found a suggestion saying that I need to separate the devices into different while loops because they may have different buffer sizes (?). I did that and it worked: all the sensors now show up in each graph. But the weird thing is, on the first run I have to stop the program and then run it a second time before the graphs show up in the application. Can anyone tell me what I did wrong and suggest a solution? Due to project rules I cannot share my VI here publicly, but if anyone is interested in helping, I'd be happy to share it privately. Thank you.
You are doing the right thing, but you have to understand how data acquisition works in LabVIEW and in the hardware.
You can increase the hardware buffer size programmatically using a property node, or try to read as fast as possible; then you don't need two separate loops.
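A property node is graphical, so I can't paste one here, but the same idea expressed with the NI-DAQmx C API looks roughly like the sketch below. The device name, sample rate and buffer size are made up for illustration, and error checking is omitted.

#include <NIDAQmx.h>
#include <stdio.h>

int main(void)
{
    TaskHandle task = 0;
    float64    data[10000];
    int32      read = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);

    /* Continuous acquisition at 10 kS/s (made-up rate) */
    DAQmxCfgSampClkTiming(task, "", 10000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 10000);

    /* Enlarge the input buffer - roughly the C-API equivalent of setting
       the buffer size through a property node in LabVIEW */
    DAQmxCfgInputBuffer(task, 100000);

    DAQmxStartTask(task);

    /* Read large blocks as fast as possible so the buffer never overflows */
    for (int i = 0; i < 100; ++i) {
        DAQmxReadAnalogF64(task, 10000, 10.0, DAQmx_Val_GroupByChannel,
                           data, 10000, &read, NULL);
        printf("read %d samples\n", (int)read);
    }

    DAQmxStopTask(task);
    DAQmxClearTask(task);
    return 0;
}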
I currently work with an NI DAQmx device too and became desperate using LabVIEW because there is no good documentation and/or examples. Then I started to use Python, which I found more intuitive. The only disadvantage is that the user interface is not so easily generated, but for this you can use Qt Designer (an open-source program available online).
I am running long simulations using project_flow and SUMO, but I don't need the GUI to refresh on every step of the simulation. Is there any way to de-couple the simulation step from the GUI refresh rate?
I do not know how this works in the context of flow, but in general you can simply use the sumo executable instead of sumo-gui to run the same simulation without a GUI. Modifying the refresh rate is not possible, as far as I know, but minimizing the window should at least avoid all draw requests from the system.
I want to use callgrind to profile my program, but it slows it down too much. What I want to do is generate a call graph using kcachegrind where every node shows what percentage of its runtime the program spent in which function. Can you tell me which features I can safely disable for better performance so that this info is still generated?
Thanks a lot!
Quick Overview
Callgrind is essentially a cache profiler (both instruction and data) that works at function-level granularity in order to reproduce the call graph. The profiler observes actions that trigger events during program execution and updates various aggregate counters maintained by the simulator.
However, this fine-grained simulation of cache events comes at a heavy cost in program runtime. You should know that even with all profiling turned off and no useful data being collected, Callgrind will still impose a minimum runtime hit of about 2-4x. When actively collecting data, it is on average 10-20x slower.
Is this theoretical minimum acceptable for your requirement? If not, you should consider other profiling options - discussed here. But if, with some careful control, speeding up large, uninteresting chunks of your program to only a 2-4x slowdown sounds reasonable, read on!
Available Hooks
Callgrind offers 2 forms of control over the collection of profiling data. It's important to understand their inter-dependencies in order to make an informed choice:
Instrumentation state - When disabled, no program actions are observed and thus no events are triggered or collected. The simulator basically switches to an 'idle' state; this is what helps you achieve the theoretical 2-4x minimum mentioned above (see Nulgrind).
But be warned, this should be used carefully! While it offers attractive benefits, this can have non-trivial effects on accuracy. From the documentation:
However, this only should be used with care and in a coarse fashion: every mode change resets the simulator state (ie. whether a memory block is cached or not) and flushes Valgrind's internal cache of instrumented code blocks, resulting in latency penalty at switching time.
Collection state - When disabled, the aggregate counters are not updated with triggered events. This provides a way to streamline collected data to only the interesting parts of your call stack.
As you might expect, however, this does not offer any noticeable speedup in execution time. And of course, instrumentation needs to be switched on for collection to be enabled.
Commands
valgrind --tool=callgrind
--instr-atstart=<yes|no> ;; default = yes
--collect-atstart=<yes|no> ;; default = yes
--toggle-collect=<function> ;; Toggle collection at entry/exit of specific function
<PROGRAM> <PROGRAM_OPTIONS>
Instrumentation - Turning this off at the beginning means you have to turn it back on again at the appropriate time. 2 alternate ways to do this:
During program execution, use the following command from the shell at the appropriate time.
callgrind_control -i <on|off>
This would require visibility into your program execution as well as some tolerance in accuracy due to the latency of deploying the command. You could use a few shell tricks to help, of course.
Insert the following macros into your program code and recompile your binary.
CALLGRIND_START_INSTRUMENTATION;
CALLGRIND_STOP_INSTRUMENTATION;
Collection - Similarly, if disabled at the start, collection needs to be toggled around the interesting parts of the code. 2 alternate ways to do this:
Use the --toggle-collect=<function> flag during launch. By definition, this would be inclusive of all the sub-calls within this function. If you can thus identify a particular parent function as your bottleneck, this can be a useful method to isolate relevant data and keep the generated call graph minimal.
Tip: Wildcards are supported in the function name!
Use the following macro before and after the relevant portion of your program code and recompile your binary. This can give you more fine-grained control within functions.
CALLGRIND_TOGGLE_COLLECT;
Summary
To combine all the ideas above, a good approach would be:
#include <valgrind/callgrind.h>
// Uninteresting program chunk
CALLGRIND_START_INSTRUMENTATION;
// A few extra lines to allow cache warm-up
CALLGRIND_TOGGLE_COLLECT;
// Portion to profile
CALLGRIND_TOGGLE_COLLECT;
CALLGRIND_DUMP_STATS;
CALLGRIND_STOP_INSTRUMENTATION;
// Rest of the program
Recompile, and launch Callgrind with:
valgrind --tool=callgrind --instr-atstart=no --collect-atstart=no <PROGRAM> <PROGRAM_OPTIONS>
Note that there will be 2 Callgrind output files generated by this method - the first created by the DUMP_STATS macro, and the second at program exit. DUMP_STATS zeroes all counters after use, which means the second log will report 0 events.
Within the active instrumentation block, you could also toggle collection multiple times and dump collected stats for each chunk.
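For example, a minimal sketch of that pattern (chunk_a and chunk_b are placeholders for the regions you actually care about; CALLGRIND_DUMP_STATS_AT lets you label each dump):

#include <valgrind/callgrind.h>

void chunk_a(void) { /* placeholder for an interesting region */ }
void chunk_b(void) { /* placeholder for another interesting region */ }

int main(void)
{
    CALLGRIND_START_INSTRUMENTATION;

    CALLGRIND_TOGGLE_COLLECT;            /* collection on  */
    chunk_a();
    CALLGRIND_TOGGLE_COLLECT;            /* collection off */
    CALLGRIND_DUMP_STATS_AT("chunk_a");  /* labelled dump; counters are zeroed */

    CALLGRIND_TOGGLE_COLLECT;
    chunk_b();
    CALLGRIND_TOGGLE_COLLECT;
    CALLGRIND_DUMP_STATS_AT("chunk_b");

    CALLGRIND_STOP_INSTRUMENTATION;
    return 0;
}

Launched the same way as above, this produces one output file per dump, plus the one written at program exit.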
I'm working on an arcade cabinet that will be able to play various video game consoles (real hardware, not emulated). There will be a PC inside to run a selection menu. I'll have to write that myself. I'll also need to program a PLC which will do various things like control the relays which switch audio/video/controls between the PC and the various consoles, etc. I'll need help with those two tasks in time, but they are not what I'm working on right now.
What I'm working on as a starting point has to do with the controller encoding. Basically, the controls for each player consist of a few buttons and a joystick. These use momentary, normally-open contact switches, one for each button, and one for each cardinal direction on the joystick. Pressing the button, or joystick direction, closes the switch. The state of the buttons is then communicated to the console by an encoder.
The encoder has a connection for each button and joystick direction which is connected to 5 volts ("high") through a pull-up resistor. When a button or direction is pressed, a connection to ground is made through the momentary switch. When the encoder reads ground ("low") on a button connection, it knows that a button has been pressed and it communicates this to the console.
I already have all this working with the various consoles, but I've thought of some features that would be nice to add. This is where my current task comes in.
The first feature is button remapping. Some of these games were designed with controllers in mind, so when you use them with an arcade control panel, some of the buttons may not be where you want them. Some games allow buttons to be remapped via software, but others do not. My idea is to add a PLC in between the joystick and buttons and the encoder. I'll call this PLC a "pre-encoder."
The pre-encoder would read the states of the buttons on some input pins, then write these states back to some output pins, relaying them to the encoder. The advantage is that its programming could associate any input pin with any output pin, effectively remapping the buttons. Whenever a console is selected via the computer's menu, a button-mapping profile associated with a particular game could be selected as well, and forwarded to the pre-encoder.
Of course, the pre-encoder's routine which reads the buttons and relays their states to the encoder must repeat very quickly for smooth control. These games will be running at about 50 to 60Hz, meaning a new video frame every 16.67 to 20ms. Ideally, the pre-encoder will be able to repeat this routine many, many times per frame to ensure the absolute minimum input lag. I want to ensure that the code and the hardware selection are optimized to run as fast as possible.
The second feature is turbo buttons. Some games, especially arcade games, require a fire button to be pressed repeatedly every time you want to fire your gun, or your ship's cannons, etc, even if you have unlimited ammo. This seems unnecessary, and it will tire your fingers out pretty quickly. A turbo button is one that can be held down continuously, yet the game is being told that you are rapidly pressing and releasing it. This could be done in software for anything running on the PC, or with an analog solution like a 555 timer, but the best method is to synchronize the turbo button timing with the video refresh rate. By feeding the vertical sync pulse from the PC or video game console's video output to a PLC, it will know exactly how often a frame of video is rendered. Turbo button timing can then be controlled by defining, in numbers of frames, the periods when the button should be pressed and released. Timing information could also be included with the game-specific button profiles.
The third feature is slow buttons. Actually, this would probably only be applied to the joystick, but I'm referring to the switches for its cardinal directions as buttons. In certain games (it will probably only be used in shmups) it is sometimes needed to move your character (ship/plane) through very tight spaces. If movement is too fast in response to even minimal joystick input, you may go too far and crash. The idea is that, while a slow activation button is held, the joystick will be made less responsive by rapidly activating and deactivating it in the same manner as the turbo buttons.
I'm not sure if I want the pre-encoder itself to be watching the vertical sync pulse or if that will slow it down too much. My current thinking is that a separate PLC will be responsible for general management of the cab itself: watching the "on" button, switching relays, communicating directly with the PC, watching the vertical sync pulse, etc. This will free up the pre-encoder to run more quickly.
Here is some example "code" for the pre-encoder. Obviously, it's just a rough outline of what I have in mind, as I don't even know what language it will be. This example assumes that a dedicated PLC will be used just as the pre-encoder. A separate PLC will be responsible for watching the vertical sync pulse, in addition to other tasks, like getting a game profile from the computer and passing some of that info to the pre-encoder. That PLC will know what the frame timing should be for turbo and slow functions, it will count frames, and during frames when turbo buttons should be disabled, it outputs high to a pin on the pre-encoder PCB, letting it know to disable turbo buttons. During frames when it should be enabled, it outputs low to that pin. Same idea with the slow buttons. There is also a pin which the pre-encoder checks at the end of its routine, so it can be told to stop and await a different game profile.
get info from other PLC (which got it from the computer, from a user-selected game profile):
array containing list of turbo buttons (buttons are identified by what input pin they are connected to)
array containing list of slow buttons (will probably only be the joystick directions, if any)
array containing list of slow activation buttons (should normally be only one button, if any)
array containing list of normal buttons (not turbo or slow)
array containing which output pin to use for each button (this determines remapping)
Begin Loop
if turbo pin is high
for each turbo button
output pin = high
next
else
for each turbo button
output pin = input pin
next
end if
if slow pin is high and slow activation button is pressed
for each slow button
output pin = high
next
else
for each slow button
output pin = input pin
next
end if
for each normal button
output pin = input pin
next
Restart Loop unless stop pin is low
If you've read all this, thank you for your time. So (finally), here are my questions:
What are your overall thoughts; on my idea in general, feasibility, etc.?
What kind of PLC should I use for the pre-encoder? I was originally thinking of trying an Arduino, but my reading indicates that it will be much too slow, due to its use of high-level programming libraries. I don't have a problem building my own board around another PLC.
What language should I use to program the PLC? I don't mind learning a new language. There's no time limit on this project, and I'll put in whatever it takes to get the pre-encoder running as fast as possible.
What will I need to flash my program onto the PLC?
At run-time, how should these PLCs communicate with each other, and with the PC?
Am I asking in the right place; right forum, right section, etc.? Anywhere else I should ask?
Awaiting your response eagerly,
-Rob
I have some thoughts that might be useful to you:
What are your overall thoughts; on my idea in general, feasibility, etc.?
This project sounds like you want to cheat at Defender, like I used to do with a 555 timer chip in my Atari joystick when I was a kid.
The project is feasible but you will need a pretty fast PLC.
You might spend a lot of time making this work, like a quest.
What kind of PLC should I use for the pre-encoder? I was originally thinking of trying an Arduino, but my reading indicates that it will be much too slow, due to its use of high-level programming libraries. I don't have a problem building my own board around another PLC.
As I thought of what PLC might be fast enough, a few things came to mind.
If you use a PLC that has a task architecture, you can use an event to trigger a task on the v-sync pulse, and another event to trigger on console activity. If you use a PLC without a task architecture, the user might notice the variable latency that will occur as the program scan moves in and out of phase with the v-sync and the activity in the game. This might not be true if the PLC is fast enough, say a 1ms scan time.
Most inexpensive PLCs are never going to make it. The overhead and performance will keep most PLCs around 5-10ms per scan. However, a PC-based PLC might work well. So maybe a Beckhoff controller will work nicely. If you use something like a CX2000, it has Windows 7, USB, DVI for the user interface, and an Ethercat bus on the side to attach physical I/O cards for the controller and console connections. See about the software below. There are many non-PC-based PLCs that would work fine, but these will likely be expensive and harder to integrate.
The Arduino solution should work if you use a fast enough model. But your development time will be higher because it doesn't come with anything but a blank screen and a bunch of libraries. Troubleshooting is much more of a pain in the neck than with PLCs, which really shine there. You'll need to plan carefully to get the Arduino to work. Also, hardware interfacing a microcontroller is harder, and you'll have to manage debouncing the switches in your code. Every PLC has filtering on its inputs, and the variety of I/O makes design easy. But the Arduino or another microcontroller is really the choice if money is an issue. A fast PLC can be really expensive ($800 to $20k, think around $1500). If you are going to build more than a few systems, the Arduino might be better.
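For what it's worth, here is a very rough sketch in Arduino-flavored C of how the pre-encoder loop (remapping, the turbo-disable pin, and software debouncing) might look. All pin numbers, the remap table, and the 2 ms debounce window are invented for illustration; a real build would probably read and write whole ports at once rather than pin-by-pin, and would load the profile tables from the supervisor PLC instead of hard-coding them.

#define NUM_BUTTONS 8
#define TURBO_CTRL_PIN 12   /* from supervisor PLC: high = hold turbo buttons released this frame */
#define DEBOUNCE_US 2000UL  /* assumed 2 ms debounce window */

/* Made-up profile data: input pins from the panel, remapped output pins to the encoder */
const int inPin[NUM_BUTTONS]   = { 2,  3,  4,  5,  6,  7,  8,  9};
const int outPin[NUM_BUTTONS]  = {22, 23, 24, 25, 26, 27, 28, 29};
const int isTurbo[NUM_BUTTONS] = { 0,  0,  0,  0,  1,  1,  0,  0};

int lastReading[NUM_BUTTONS];
int debounced[NUM_BUTTONS];
unsigned long lastChange[NUM_BUTTONS];

void setup() {
  pinMode(TURBO_CTRL_PIN, INPUT);
  for (int i = 0; i < NUM_BUTTONS; i++) {
    pinMode(inPin[i], INPUT_PULLUP);   /* buttons are active-low */
    pinMode(outPin[i], OUTPUT);
    digitalWrite(outPin[i], HIGH);     /* start with everything released */
    lastReading[i] = HIGH;
    debounced[i]   = HIGH;
    lastChange[i]  = 0;
  }
}

void loop() {
  int turboOff = digitalRead(TURBO_CTRL_PIN);

  for (int i = 0; i < NUM_BUTTONS; i++) {
    int reading = digitalRead(inPin[i]);

    /* Software debounce: only accept a level that has been stable for DEBOUNCE_US */
    if (reading != lastReading[i]) {
      lastChange[i] = micros();        /* still bouncing, restart the timer */
    } else if (micros() - lastChange[i] >= DEBOUNCE_US) {
      debounced[i] = reading;          /* stable long enough, accept it */
    }
    lastReading[i] = reading;

    if (isTurbo[i] && turboOff) {
      digitalWrite(outPin[i], HIGH);            /* force "released" on turbo-off frames */
    } else {
      digitalWrite(outPin[i], debounced[i]);    /* pass through, remapped via outPin[] */
    }
  }
}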
What language should I use to program the PLC? I don't mind learning a new language. There's no time limit on this project, and I'll put in whatever it takes to get the pre-encoder running as fast as possible.
IEC61131 is a standard for PLC programming languages. In the USA most PLCs are programmed in ladder logic because it is really easy to learn and quicker to troubleshoot and maintain in machinery. Structured text has its advantages too, particularly in performance. It looks like some amalgamation of BASIC/C/Java, is easy to learn, and looks almost like your pseudocode example. As for your project, I think it could be programmed in either language. I would never use the other IEC61131 languages for this task.
Beckhoff TwinCAT3 uses MS Visual Studio as the IDE, where you can write both the selection menu (in VB/C++/C#) and the PLC code (in IEC61131) in the same project. The runtime license for TwinCAT (on the CX2000 unit) runs in kernel mode, yielding processing time to Windows 7 only when it is not doing something more important. I've used a few CX1020 models and they were great performers. The scan times were around 5ms with a significant amount of code. Faster units will scan <1ms.
What will I need to flash my program onto the PLC?
PLCs don't "flash" like microcontrollers. Whatever software you use to write the program will have a way to connect to the controller. The term "go online" refers to making the connection. The terms "download" and "upload" refer to transferring the program between the development computer and the PLC. The term "online edit" refers to making code changes while the PLC is executing the code. When modern PLCs are powered down, they use a battery to copy program and user RAM to flash. When they power up, they copy the flash back to RAM. To make a connection to any modern PLC, you will use a USB or Ethernet cable.
At run-time, how should these PLCs communicate with each other, and with the PC?
You plan more than one PLC? A PLC connection to a PC is a complicated subject. The term "OPC Server" refers to some [expensive] software that lets your custom Windows PC application access memory in PLCs. The Beckhoff solution glues all that together nicely without buying more stuff. PLC to PLC communication is easier. The method is usually by ethernet and varies widely as to the details.
Am I asking in the right place; right forum, right section, etc.? Anywhere else I should ask?
Sure, there is some PLC activity on this forum, which appears to tend toward hardcore PC/Web/Mobile development. I come here for awesomely intelligent answers to my deeper software questions.
You could try plctalk.net, a forum that is a little more geared toward nuts-and-bolts engineers and service techs with wild connectivity and compatibility questions related to machinery and automation. You might get some blank stares about vertical sync pulses. Their skill sets revolve around an industrial paradigm, where reliability is probably their highest calling.
You might also ask questions about performance on an Arduino or Microchip/Atmel/ARM forums. If you tell them that a PLC is faster than their hardware, that will rile them up real good! They might tell you that you can get microsecond performance numbers, which you can if you are using hardware interrupts and lots of physical circuitry to make that a reality, and you are able to cope with the sleepless nights of troubleshooting.
-Dennis
I have a program that takes about 1 second to run and takes a file as input and produces another file as output. Problem is I have to be able to process about 30 files a second. The files to process will be available as a queue (implemented over memcached) and don't have to be processed exactly in order, so basically an instance of the program checks out a file to process and does so. I could use a process manager that automatically launches instances of the program when system resources are available.
At the simple end, "system resources" will simply mean "up to two processes at a time," but if I move to a different machine this could be 2 or 10 or 100 or whatever. I could use a utility to handle this, at least. And at the complex end, I would like to bring up another process whenever CPU is available, since these machines will be dedicated. CPU time seems to be the constraining resource - the program isn't memory intensive.
What tool can accomplish this sort of process management?
Storm - Without knowing more details, I would suggest BackType Storm. But it would probably mean a total rewrite of your current code. :-)
More details in the Tutorial, but it basically takes tuples of work and distributes them through a topology of worker nodes. A "spout" emits work into the topology, and a "bolt" is a step/task in the graph where some bit of work takes place. When a bolt finishes its work, it emits the same or a new tuple back into the topology. Bolts can do work in parallel or in series.