Program won't show data until I stop it and run it again first? - LabVIEW

I'm a newbie using LabVIEW for my project. I'm developing a program that gathers data from sensors attached to a DAQmx board and also from an STS-VIS Ocean Optics spectrometer. At first I combined both devices in one loop inside the same flat sequence structure, but I got an error saying: "The application is not able to keep up with the hardware acquisition." I could not get the data to show on the graph for both devices, although each one worked fine when I ran it separately. I found a suggestion saying that I need to separate the two devices into different while loops because they may have different buffer sizes (?). I did that and it worked: all the sensors now show up in their own graphs. But the weird thing is, on the first run I have to stop the program and then run it a second time before the graphs show anything. Can anyone tell me what I did wrong and suggest a solution? Due to the project rules I cannot share my VI here publicly, but if anyone is interested in helping, I'd be happy to share it privately. Thank you.

You are doing the right thing, but you have to understand how data acquisition works in LabVIEW and in the hardware.
You can increase the hardware buffer programmatically using a property node, or try to read as fast as possible; then you don't need two separate loops.
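For illustration only (in LabVIEW you would set the same things through the DAQmx Buffer property node and the DAQmx Read VI), here is roughly what a larger input buffer plus a fast continuous read loop looks like with the DAQmx C API; the device name, channel, rates, and sizes below are placeholders, not values from the question:

```cpp
// Sketch with the DAQmx C API: enlarge the input buffer and keep the read
// loop tight so the driver's buffer never overflows.
// "Dev1/ai0" and the rates/sizes are placeholders for your own setup.
#include <NIDAQmx.h>
#include <cstdio>

int main()
{
    TaskHandle task = nullptr;
    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, nullptr);

    // Continuous acquisition at 10 kS/s.
    DAQmxCfgSampClkTiming(task, "", 10000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);

    // Explicitly enlarge the per-channel input buffer (the LabVIEW
    // equivalent is the DAQmx Buffer property node).
    DAQmxCfgInputBuffer(task, 100000);

    DAQmxStartTask(task);

    float64 data[1000];
    for (int i = 0; i < 100; ++i)   // read as fast as data arrives; avoid extra work in this loop
    {
        int32 read = 0;
        DAQmxReadAnalogF64(task, 1000, 10.0, DAQmx_Val_GroupByChannel,
                           data, 1000, &read, nullptr);
        std::printf("Read %d samples\n", read);
    }

    DAQmxClearTask(task);
    return 0;
}
```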

I currently work with an NI DAQmx device too, and I became desperate using LabVIEW because there is no good documentation and/or examples. Then I started to use Python, which I found more intuitive. The only disadvantage is that the user interface is not generated as easily, but for that one can use Qt Designer (an open-source program available online).

Related

Postprocess Depth Image to get Skeleton using the Kinect SDK / other tools?

The short question: I am wondering if the Kinect SDK / NiTE can be exploited to get depth-image-in, skeleton-out software.
The long question: I am trying to dump the depth, RGB and skeleton data streams captured from a Kinect v2 into rosbags. However, to the best of my knowledge, capturing the skeleton stream on Linux with ROS and a Kinect v2 isn't possible yet. Therefore, I was wondering if I could dump rosbags containing the RGB and depth streams, and then post-process these to get the skeleton stream.
I can capture all three streams on Windows using the Microsoft Kinect v2 SDK, but dumping them to rosbags with all the metadata (camera_info, sync info, etc.) would be painful (correct me if I am wrong).
It's quite some time ago that I worked with NiTE (and I only used the Kinect v1), so maybe someone else can give a more up-to-date answer, but from what I remember this should easily be possible.
As long as all relevant data is published via ROS topics, it is quite easy to record them with rosbag and play them back afterwards. Every node that can handle live data from the sensor will also be able to do the same work on recorded data coming from a bag file.
One issue you may encounter is that if you record Kinect data, the bag files quickly become very large (several gigabytes). This can be problematic if you want to edit the file afterwards on a machine with very little RAM. If you only want to play the file back, or if you have enough RAM, this should not really be a problem, though.
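For example, reading a recorded depth stream back out of a bag file with the rosbag C++ API might look like the sketch below; the bag file name and topic name are placeholders, and the actual skeleton-tracking step is left out:

```cpp
// Sketch: iterate over the depth images stored in a bag file and hand each one
// to whatever offline post-processing (e.g. skeleton tracking) you want to run.
#include <rosbag/bag.h>
#include <rosbag/view.h>
#include <sensor_msgs/Image.h>
#include <string>
#include <vector>

int main()
{
    rosbag::Bag bag;
    bag.open("kinect_recording.bag", rosbag::bagmode::Read);        // placeholder file name

    std::vector<std::string> topics = {"/kinect2/sd/image_depth"};  // placeholder topic name
    rosbag::View view(bag, rosbag::TopicQuery(topics));

    for (const rosbag::MessageInstance& m : view)
    {
        sensor_msgs::Image::ConstPtr depth = m.instantiate<sensor_msgs::Image>();
        if (depth)
        {
            // feed depth->data into the skeleton tracker here
        }
    }
    bag.close();
    return 0;
}
```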
Indeed it is possible to perform NiTE2 skeleton tracking on any depth image stream.
Refer to:
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/How-to-use
and
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/About-PrimeSense-NiTE
With this extension you can add a virtual device that allows you to manipulate each pixel of the depth stream. This device can then be used to create a UserTracker object. Skeleton tracking works as long as the right device name is provided:
\OpenNI2\VirtualDevice\Kinect
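For illustration, a minimal OpenNI2/NiTE2 sketch of that idea might look as follows; the error handling and the endless read loop are my own additions, and the device URI is the one quoted above:

```cpp
// Minimal sketch: open the virtual depth device and run NiTE2 skeleton tracking on it.
#include <OpenNI.h>
#include <NiTE.h>
#include <cstdio>

int main()
{
    openni::OpenNI::initialize();
    nite::NiTE::initialize();

    openni::Device device;
    // Open the virtual device instead of a physical sensor.
    if (device.open("\\OpenNI2\\VirtualDevice\\Kinect") != openni::STATUS_OK)
    {
        std::printf("Could not open virtual device\n");
        return 1;
    }

    nite::UserTracker userTracker;
    if (userTracker.create(&device) != nite::STATUS_OK)
    {
        std::printf("Could not create UserTracker\n");
        return 1;
    }

    // Read frames (fed by your recorded depth images) and query skeletons.
    for (;;)
    {
        nite::UserTrackerFrameRef frame;
        if (userTracker.readFrame(&frame) != nite::STATUS_OK)
            continue;

        const nite::Array<nite::UserData>& users = frame.getUsers();
        for (int i = 0; i < users.getSize(); ++i)
        {
            const nite::UserData& user = users[i];
            if (user.isNew())
                userTracker.startSkeletonTracking(user.getId());
            // user.getSkeleton() holds the joint positions once tracking is active.
        }
    }
}
```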
But consider the usage limits:
NiTE is only allowed to be used with "Authorized Hardware".

get available memory in winrt

I want to plot a larger amount of data within a Metro application and I need to buffer it. To find out how much I can buffer, it would be great to know how much memory is still available (to my app); this shouldn't include virtual memory.
Is there any way in a Metro app to get this information? I only found GlobalMemoryStatusEx, but that can only be used in desktop apps.
Thank you
I just had to deal with this and got hold of the right people at Microsoft to answer it. Unfortunately the answer was: no, you can't do that, except by using the restricted calls that you found, and using those prevents you from getting certified for publication in the Store.
How about just trying to allocate a lot of memory in chunks? When an allocation first fails, add up the sizes of the chunks and either release them or use them for your operations.
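A rough sketch of that probing idea is below (plain C++, with an arbitrarily chosen chunk size); note that on a memory-constrained device the OS may terminate the app before an allocation ever fails, so treat the result as a heuristic:

```cpp
// Estimate roughly how much memory can still be allocated by grabbing
// fixed-size chunks until allocation fails, then releasing them again.
#include <cstddef>
#include <cstdio>
#include <new>
#include <vector>

std::size_t ProbeAvailableMemory(std::size_t chunkSize = 16 * 1024 * 1024)
{
    std::vector<char*> chunks;
    std::size_t total = 0;

    for (;;)
    {
        char* p = new (std::nothrow) char[chunkSize];
        if (p == nullptr)
            break;                              // first failure: stop probing
        for (std::size_t i = 0; i < chunkSize; i += 4096)
            p[i] = 0;                           // touch pages so they are really committed
        chunks.push_back(p);
        total += chunkSize;
    }

    for (char* p : chunks)                      // release the probe allocations
        delete[] p;

    return total;
}

int main()
{
    std::printf("Roughly %zu MB allocatable\n", ProbeAvailableMemory() / (1024 * 1024));
}
```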

Debugging methods for finding the location and error that's causing a game to freeze

I recently came across an error that I cannot understand. The game I'm developing with Cocos2D just freezes at a certain random point -- it gets a SIGSTOP -- and I cannot find the reason. What tool can I use (and how do I use it) to find out where the error occurs and what's causing it?
Jeremy's suggestion to stop in the debugger is a good one.
There's a really quick way to investigate a freeze (or any performance issue), especially when it's not easy to reproduce. You have to have a terminal handy (so you'll need to be running in the iOS simulator or on Mac OS X, not on an iOS device).
When the hang occurs pop over to a terminal and run:
sample YourProgramName
(If there are spaces in your program name, wrap it in quotes, like sample "My Awesome Game".) The output of sample is a log showing where your program is spending its time, and if your program is actually hung, it will be pretty obvious which functions are stuck.
I disagree with Aaron Golden's answer above, as running on a device is extremely useful for getting a real-case scenario of where the app freezes. The simulator has more memory and does not reproduce the hardware of the device accurately (for example, the frame rate is in certain cases lower).
"Obviously", you need to connect your device (with a developer profile) to Xcode and watch the console for the traces that user #AaronGolden suggested.
If those are not enough you might want to enable a general exception breakpoint in Xcode to capture more of the stacktrace messages.
When I started learning Cocos2D my app often froze. This is a list of common causes:
I wasn't using sprite sheets, and hence the frame rate was dropping dramatically.
I was using too much memory (too many high-definition sprites). Have a look at TexturePacker and use the pvr.ccz or pvr.gz format; it cuts memory allocation in half.
Use Instruments to profile your app for memory warnings (for example, look at the Allocations instrument and look for memory warnings).

Data usage from any application

I want to read how much 3G data each app uses. Is this possible in iOS 5.x? And in iOS 4.x? My goal is, for example:
Maps consumed 3 MB from your data plan
Mail consumed 420 kB from your data plan
etc, etc. Is this possible?
EDIT:
I just found an app doing that: Data Man Pro
EDIT 2:
I'm starting a bounty. Extra points go to the answer that makes this clear. I know it is possible (screenshot from Data Man Pro), and I'm sure the solution is limited. But what is the solution, and how do I implement it?
These are just hints, not a solution. I have thought about this many times, but never really started implementing the whole thing.
First of all, you can calculate transferred bytes by querying the network interfaces; take a look at this SO answer for code and a nice explanation about network interfaces on iOS (a sketch of this idea follows below).
Use sysctl or similar system functions to detect which apps are currently running (and by running I mean the process state is set to RUNNING, as the ps or top commands report on OS X; I've never tried it, I just assume this is possible on iOS, hoping there are no problems with the app running as an unprivileged user). That way you can deduce which apps are running and save the traffic stats for those apps. Obviously, given the possibility of applications running in the background, it is hard to determine which app is transferring the data.
It might also be possible to retrieve information about network activity per process/app the way nettop does on OS X Lion; unfortunately, nettop uses the private NetworkStatistics.framework, so you can't dig anything out of its implementation.
Also take time into account.
My 2 cents
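To illustrate the first hint, here is a minimal sketch (my own, not from the linked answer) that sums the bytes transferred per interface with getifaddrs; note that this gives device-wide totals per interface since boot, not per-app figures:

```cpp
// Sum bytes sent/received per interface family using getifaddrs().
// On iOS, interfaces starting with "pdp_ip" are cellular and "en" is Wi-Fi.
#include <cstdint>
#include <cstring>
#include <ifaddrs.h>
#include <net/if.h>
#include <sys/socket.h>

struct TrafficCounters { std::uint64_t wifiBytes = 0; std::uint64_t cellularBytes = 0; };

TrafficCounters ReadTrafficCounters()
{
    TrafficCounters counters;
    ifaddrs* addrs = nullptr;
    if (getifaddrs(&addrs) != 0)
        return counters;

    for (const ifaddrs* cur = addrs; cur != nullptr; cur = cur->ifa_next)
    {
        if (cur->ifa_addr == nullptr || cur->ifa_addr->sa_family != AF_LINK || cur->ifa_data == nullptr)
            continue;

        const if_data* stats = static_cast<const if_data*>(cur->ifa_data);
        const std::uint64_t bytes = stats->ifi_ibytes + stats->ifi_obytes;

        if (std::strncmp(cur->ifa_name, "en", 2) == 0)
            counters.wifiBytes += bytes;
        else if (std::strncmp(cur->ifa_name, "pdp_ip", 6) == 0)
            counters.cellularBytes += bytes;
    }
    freeifaddrs(addrs);
    return counters;   // counters reset when the interface goes down or the device reboots
}
```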
No. All applications in iOS are sandboxed, meaning you cannot access anything outside of your own application, so I do not believe this is possible. Neither do I believe per-app data traffic is stored at this level on the device; otherwise Apple would have exposed it in either the network page or the usage page in Settings.app.
Besides that, not everybody has a "data plan". E.g. in Sweden it is common that data traffic is free of charge, without any limit on either size or speed.

starting a microcontroller simulator/emulator

I would like to create/start a simulator for the following microcontroller board: http://www.sparkfun.com/commerce/product_info.php?products_id=707#
The firmware is written in assembly, so I'm looking for some pointers on how one would go about simulating the inputs that the hardware would receive, so that the simulator can respond to the outputs from the firmware (which would also require running the firmware in the simulated environment).
Any pointers on how to start?
Thanks
Chris
Writing a whole emulator is going to be a real challenge. I've attempted to write an ARM emulator before, and let me tell you, it's not a small project. You're going to have to either emulate the entire CPU core or find one that's already written.
You'll also need to figure out how all the I/O works. There may be docs from SparkFun about that board, but you'll need to write a memory manager if it uses MMIO, etc.
The concept of an emulator isn't that far from an interpreter, really. You need to interpret the firmware code and basically follow along with the instructions.
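To make the interpreter analogy concrete, a bare-bones fetch-decode-execute loop might look like the sketch below; this uses a generic, made-up instruction set for illustration, not the actual PIC18 encoding:

```cpp
// Generic fetch-decode-execute skeleton for a tiny fictional instruction set.
// A real PIC18F2520 core would decode the 16-bit opcodes documented in the datasheet.
#include <array>
#include <cstdint>

struct Machine
{
    std::array<std::uint16_t, 4096> program{};   // program memory
    std::array<std::uint8_t, 256>   ram{};       // data memory / registers
    std::uint16_t pc = 0;                        // program counter
    bool running = true;
};

void step(Machine& m)
{
    const std::uint16_t instr = m.program[m.pc++];      // fetch
    const std::uint8_t  op    = instr >> 12;             // decode (fictional encoding)
    const std::uint8_t  reg   = (instr >> 8) & 0x0F;
    const std::uint8_t  imm   = instr & 0xFF;

    switch (op)                                           // execute
    {
    case 0x0: m.running = false;          break;          // HALT
    case 0x1: m.ram[reg] = imm;           break;          // load immediate
    case 0x2: m.ram[reg] += m.ram[imm];   break;          // add memory to register
    case 0x3: m.pc = instr & 0x0FFF;      break;          // jump
    default:  /* unknown opcode: trap, log, or ignore */ break;
    }
}

void run(Machine& m)
{
    while (m.running)
        step(m);   // memory-mapped I/O would be emulated when specific ram[] addresses are touched
}
```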
I would recommend a good interactive debugger instead of tackling an emulator. The chances of destroying the hardware are low, and really, would you rather buy a new board or spend nine months writing something that won't implement the entire system?
It's likely that the PIC 18F2520 already has an emulator core written for it, but you'll still need to delve into all the hardware specs to see how the I/O is mapped. If you're feeling up to it, it would be a good project, but I would consider just using a remote debugger instead.
You'll have to write a PIC simulator and then emulate the IO functionality of the ports.
To be honest, it looks like it's designed as a dev kit - I wouldn't worry about your code destroying the device if you take care. Unless this is a runner-up for an enterprise package, I would seriously question the ROI of writing a sim.
Is there a particular reason to make an emulator/simulator, vs. just using the real thing?
The board is inexpensive; Microchip now has the RealICE debugger which is quite a bit more responsive than the old ICD2 "hockey puck".
Microchip's MPLAB already has a built-in simulator. It won't simulate the whole board for you, but it will handle the 18F2520. You can sort of use input test vectors and log output files; I've done this before with a different Microchip IC, and it was doable but kind of cumbersome. I would suggest you take the unit-testing approach and modularize the way you do things: figure out your test inputs and expected outputs for a manageable piece of the system.
"It's likely that the PIC 18F2520 already has an emulator core written for it"
An open-source, cross-platform simulator for Microchip PICs is available under the name "gpsim".
It's extremely unlikely that a bug in your code could damage the physical circuitry. If that's possible, then it is either a bug in the board design or it should be very clearly documented.
If I may offer you a suggestion from many years of experience working with these devices: don't program them in assembly. You will go insane. Use C or BASIC or some higher-level language. Microchip produces a C compiler for most of their chips (dunno about this one), and other companies produce them as well.
If you insist on using an emulator, I'm pretty sure Microchip makes an emulator for nearly every one of their microcontrollers (at least one from each product line, which would probably be good enough). These emulators are not always cheap, and I'm unsure of their ability to accept complex external input.
If you still want to try writing your own, I think you'll find that emulating the PIC itself will be fairly straightforward -- the format of all the opcodes is well documented, as is the memory architecture, etc. It's going to be emulating the other devices on the board and the interconnections between them that will kill you. You might want to look into coding the interconnections between the components using a VHDL tool that will allow you to create custom simulations for the different components.
Isn't this a hardware-in-the-loop simulator problem? (e.g. http://www.embedded.com/15201692 )