Multi-threaded scientific data visualization in C++

I have a process which generates a data vector from a sensor. I'm using Intel Integrated Performance Primitives v5.3 update 3 for Windows on IA-32 to process it further for some calculations. I want to know if there is any C++ library that lets me plot the vector as a histogram/bar chart during data acquisition. I can write the multi-threaded code, but I need information on the availability of plotting functions in C++. This is pretty simple in MATLAB, but I want to do it in C++.
Suggestions are welcome!

You can try one of these:
CBarChart
Scientific charting control
High-speed Charting Control
VTK
ChartDirector
Charting Library
GDCHART
Carnac Chart Library

As mentioned above, VTK is an open-source C++ visualization library. More importantly, it is parallelized by design and will try to use whatever hardware you give it.
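As a rough illustration, here is a minimal sketch of drawing a vector of counts as a bar chart with VTK's 2D charts API. The bin values are made up, and the calls assume a reasonably recent VTK (older 5.x releases use SetInput instead of SetInputData):

    #include <vtkChartXY.h>
    #include <vtkContextScene.h>
    #include <vtkContextView.h>
    #include <vtkFloatArray.h>
    #include <vtkPlot.h>
    #include <vtkRenderWindowInteractor.h>
    #include <vtkSmartPointer.h>
    #include <vtkTable.h>

    int main() {
      // Build a two-column table: bin index and count (placeholder data).
      auto table = vtkSmartPointer<vtkTable>::New();
      auto bins = vtkSmartPointer<vtkFloatArray>::New();
      bins->SetName("Bin");
      auto counts = vtkSmartPointer<vtkFloatArray>::New();
      counts->SetName("Count");
      table->AddColumn(bins);
      table->AddColumn(counts);
      table->SetNumberOfRows(16);
      for (int i = 0; i < 16; ++i) {
        table->SetValue(i, 0, i);
        table->SetValue(i, 1, (i * 7) % 13);  // stand-in for real sensor counts
      }

      // Put a bar plot of Count vs. Bin into an interactive chart view.
      auto view = vtkSmartPointer<vtkContextView>::New();
      auto chart = vtkSmartPointer<vtkChartXY>::New();
      view->GetScene()->AddItem(chart);
      vtkPlot* bars = chart->AddPlot(vtkChart::BAR);
      bars->SetInputData(table, 0, 1);

      view->GetInteractor()->Initialize();
      view->GetInteractor()->Start();
      return 0;
    }

In a live-acquisition setup you would refill the table from your acquisition thread and re-render, rather than blocking in Start().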

MathGL plotting functions can be executed in a separate thread and can be parallelized on the user side.
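For reference, a minimal MathGL sketch that draws a data vector as a bar chart and writes it to a PNG (assuming the MathGL 2.x header layout; the bin values are invented, and in the real application you would refill the mglData from the acquisition thread):

    #include <mgl2/mgl.h>

    int main() {
      // Placeholder histogram counts; refill these from the acquisition thread.
      mglData counts(16);
      for (long i = 0; i < 16; ++i)
        counts.a[i] = (i * 7) % 13;

      mglGraph gr;
      gr.SetRanges(0, 16, 0, 13);
      gr.Axis();                       // draw axes
      gr.Bars(counts);                 // draw the vector as a bar chart
      gr.WriteFrame("histogram.png");  // output file name is arbitrary
      return 0;
    }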

I use OpenInventor for scientific visualisation. It may be (in some ways) a relic of the old SGI days, but it is still being supported and works well. Regarding graphing and other scientific visualisation, look at MeshViz and other extensions from Mercury:
Mercury (formerly TGS) OpenInventor or Coin, a dual-licensed (GPL + commercial) alternative
MeshViz extension
It has charting, vector visualisation, etc. It's quite comprehensive.
It's not free, but they do trial licenses so you can determine if it suits your needs.

Related

Can LabVIEW simulate electrical circuits like PSpice?

I really need to know whether I can simulate simple circuits like RC, op-amp, RL, or RLC circuits in LabVIEW.
Or can it only process recorded signals?
I think it cannot create and simulate electrical circuits like PSpice or Simulink can.
Please let me know if you are sure it cannot do this.
LabVIEW is a tool for graphical programming.
Of course, you can (try to) build up a simulation tool with it, similar to what you would do in C, but you'd have to do it completely by hand, as the authors of the various *Spice tools did.
It definitely cannot simulate anything Spice-like. It is as if you asked whether you could compile code from within Excel, or manage a database with Paint.
LabVIEW cannot simulate circuits.
If you are looking for general-purpose software that can capture and simulate electronic circuits, LabVIEW is not suitable.
If you want to simulate specific, simple circuits like the ones you list, for which you can easily define the set of equations that determine their behaviour, you should be able to do it in LabVIEW, using the mathematics functions for example (the sketch below shows the kind of equation you would implement). This would make the most sense if you wanted to link your simulation with some other function of LabVIEW such as hardware input/output, graphing, data analysis, etc.
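To make that concrete, here is a minimal sketch of that kind of equation, written in C++ purely as pseudocode for what you would wire up in a LabVIEW loop or formula node. The component values, step size, and iteration count are all made up:

    #include <cstdio>

    // Step response of an RC low-pass, integrated with forward Euler.
    int main() {
        const double R = 1e3, C = 1e-6;     // 1 kOhm, 1 uF
        const double Vin = 5.0, dt = 1e-5;  // 5 V step input, 10 us time step
        double Vc = 0.0;                    // capacitor voltage

        for (int i = 0; i < 1000; ++i) {
            Vc += dt * (Vin - Vc) / (R * C);  // dVc/dt = (Vin - Vc) / (R*C)
            if (i % 100 == 0)
                printf("t=%.5f s  Vc=%.3f V\n", i * dt, Vc);
        }
        return 0;
    }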
Sorry, but your circuit simulator is in another castle!
I would recommend another tool (from National Instruments) called Multisim.
http://www.ni.com/multisim/

Can I do this kind of code vectorization on iOS, and what are the alternatives?

I came across an interesting blog post talking about some kind of superb technique to speed up processing by "vectorizing code". It's very scientific.
He's using something called SSE2 and also talks about SPU, and now I'm curious how this can be brought to digital signal processing on the iPhone.
Although this seems to be something I must deal with in the future, I wonder what the alternatives are. Some people told me it is possible to perform massive-parallel calculations on the GPU.
What options do we have to speed things up like this or even better? What frameworks and technologies are available?
The ARM CPUs on newer iOS devices have Neon SIMD, which is somewhat similar to SSE on x86 or AltiVec on PowerPC.
You might want to look at Apple's Accelerate framework, which started out on Mac OS X but is now also available on iOS 4.0 and later - it contains a lot of useful routines which have already been vectorized.
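As a small illustration of what Accelerate gives you, here is a minimal sketch using vDSP to add two float vectors element-wise; the buffers and sizes are invented for the example, and it only builds against Apple's Accelerate framework:

    #include <Accelerate/Accelerate.h>
    #include <cstdio>

    int main() {
        const vDSP_Length n = 8;
        float a[n] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[n] = {8, 7, 6, 5, 4, 3, 2, 1};
        float c[n];

        // c[i] = a[i] + b[i], using whatever SIMD hardware is available
        // (NEON on iOS devices).
        vDSP_vadd(a, 1, b, 1, c, 1, n);

        for (vDSP_Length i = 0; i < n; ++i)
            printf("%.1f ", c[i]);
        printf("\n");
        return 0;
    }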
Alternatively you can try writing your own Neon SIMD routines, although this is not for the faint-hearted.

OpenKinect Maturity

I'm interested in writing some homebrew code for the Microsoft Kinect console. I have a few applications which I think would translate well to the platform. I've been toying with the idea of giving it a shot using the OpenKinect drivers and libraries. Obviously this would be a lot of work, but I am wondering just how much. Does anyone have experience with OpenKinect? Do you get only the raw video/audio data from the device, or has anyone written higher level abstractions to make common tasks easier?
The OpenKinect library is basically a driver (at least for now), so don't expect many high-level functions from it. You will more or less get the raw data from both the depth and the video cameras.
This is basically an array received in a callback function each time a frame arrives.
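To give a flavour of that callback model, here is a minimal libfreenect sketch; the exact mode-setup calls vary between library versions, so treat it as an outline rather than exact API:

    #include <libfreenect.h>
    #include <stdint.h>
    #include <stdio.h>

    // Called once per incoming depth frame with a raw 640x480 buffer.
    static void depth_cb(freenect_device* dev, void* v_depth, uint32_t timestamp) {
        uint16_t* depth = static_cast<uint16_t*>(v_depth);
        printf("frame at %u, first sample: %u\n", timestamp, depth[0]);
    }

    int main() {
        freenect_context* ctx;
        freenect_device* dev;
        if (freenect_init(&ctx, NULL) < 0) return 1;
        if (freenect_open_device(ctx, &dev, 0) < 0) return 1;

        freenect_set_depth_callback(dev, depth_cb);
        freenect_set_depth_mode(dev, freenect_find_depth_mode(
            FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
        freenect_start_depth(dev);

        // Pump events; the callback fires once per frame.
        while (freenect_process_events(ctx) >= 0) { /* process/plot elsewhere */ }

        freenect_stop_depth(dev);
        freenect_close_device(dev);
        freenect_shutdown(ctx);
        return 0;
    }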
You can give it a try by following the instructions on the OpenKinect website; it's really quick to install, and you can play a bit with the bundled glview application to get a feel for what's possible.
I've set up a few demos using OpenCV and got pretty cool results even though I didn't have much background in computer vision, so I can only encourage you to try it yourself!
Alternatively, if you're looking for more advanced functions, the OpenNI framework was just released this week and provides some impressive high-level algorithms such as skeleton tracking and some gesture recognition. Part of the framework consists of proprietary algorithms from PrimeSense (like the powerful skeleton tracking module). I haven't tried it yet and don't know how well it integrates with the Kinect and the different OSes, but since a bunch of people from different groups (OpenKinect, Willow Garage...) are working hard on it, any remaining issues should be sorted out within a week.
Elaborating further on what Jules Olleon wrote, I've worked with OpenNI (http://www.openni.org) and the algorithms on top of it (NITE), and I highly recommend using these frameworks. Both frameworks are well documented and come with numerous samples from which you can start out.
Basically, OpenNI abstracts the lower-level details of working with the sensor and its driver for you, and gives you a convenient way to get what you want from a "generator" (e.g. xn::DepthGenerator for getting the raw depth data). OpenNI is open source and free to use in any application. OpenNI also handles the platform abstraction for you. As of today, OpenNI is supported and works fine on Windows 32/64 and Linux, and is in the process of being ported to OS X. Bindings are available for multiple programming languages (C, C++, .NET, Python, and a few others, I believe).
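For example, reading raw depth through the OpenNI 1.x C++ wrapper looks roughly like the sketch below (error handling trimmed to a single check; exact setup can differ slightly between OpenNI releases):

    #include <XnCppWrapper.h>
    #include <cstdio>

    int main() {
        xn::Context context;
        if (context.Init() != XN_STATUS_OK) return 1;

        xn::DepthGenerator depth;
        depth.Create(context);          // the "generator" that produces depth maps
        context.StartGeneratingAll();

        for (int frame = 0; frame < 30; ++frame) {
            context.WaitOneUpdateAll(depth);
            const XnDepthPixel* map = depth.GetDepthMap();  // raw depth values
            printf("frame %d, depth at (0,0): %u\n", frame, map[0]);
        }

        context.Shutdown();
        return 0;
    }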
NITE has additional interfaces built above OpenNI, which give you higher-level results (e.g. track a hand-point, skeletons, scene analysis etc). You'll want to check the subtleties of NITE's license regarding when/where you can use it, but it's still probably the easiest and fastest way to get analysis (e.g. skeleton) for now. NITE is closed-source, so PrimeSense need to supply a binary version for you to use. Currently windows and linux versions are available.
I haven't worked with OpenKinect, but I've been working with OpenNI and SensorKinect for a few months now for my research. If you are planning to work with raw data from the Kinect, they work great for giving you depth and video (they don't support motor control). I've used them with C++ and OpenGL on both Windows 64-bit and Ubuntu 32-bit with almost no modifications to the code. It's very easy to learn if you know basic C++. Installing it might be a bit of a headache, though.
For more advanced features such as skeleton detection, gesture recognition, etc., I highly recommend using middleware such as NITE with OpenNI, or the packages listed here: Middlewares developed around OpenNI, rather than reinventing the wheel. NITE is also very easy to use once you have OpenNI working; for example, joint recognition is only around 10-20 extra lines of code.
Something I would recommend to my younger self is to learn and work with a basic game engine (e.g. Unity) rather than directly with OpenGL. It would give you much better and more enjoyable graphics with less hassle, and would also let you easily integrate your program with other tools such as PhysX. I haven't tried any, but I know there are some plugins for using Kinect drivers in Unity.

How Do You Profile & Optimize CUDA Kernels?

I am somewhat familiar with the CUDA visual profiler and the occupancy spreadsheet, although I am probably not leveraging them as well as I could. Profiling & optimizing CUDA code is not like profiling & optimizing code that runs on a CPU. So I am hoping to learn from your experiences about how to get the most out of my code.
There was a post recently looking for the fastest possible code to identify self numbers, and I provided a CUDA implementation. I'm not satisfied that this code is as fast as it can be, but I'm at a loss to figure out both what the right questions are and which tool I can get the answers from.
How do you identify ways to make your CUDA kernels perform faster?
If you're developing on Linux then the CUDA Visual Profiler gives you a whole load of information, although knowing what to do with it can be a little tricky. On Windows you can also use the CUDA Visual Profiler, or (on Vista/7/2008) you can use Nexus, which integrates nicely with Visual Studio and gives you combined host and GPU profile information.
Once you've got the data, you need to know how to interpret it. The Advanced CUDA C presentation from GTC has some useful tips. The main things to look out for are:
Optimal memory accesses: you need to know what you expect your code to do and then look for exceptions. So if you are always loading floats, and each thread loads a different float from an array, then you would expect to see only 64-byte loads (on current hardware). Any other loads are inefficient. The profiling information will probably improve on future hardware.
Minimise serialization: the "warp serialize" counter indicates that you have shared memory bank conflicts or constant serialization; the presentation goes into more detail on what to do about this, as does the SDK (e.g. the reduction sample).
Overlap I/O and compute: this is where Nexus really shines (you can get the same information manually using cudaEvents); if you have a large amount of data transfer, you want to overlap the compute and the I/O (see the sketch after this list).
Execution configuration: the occupancy calculator can help with this, but simple methods like commenting out the compute to compare expected vs. measured bandwidth are really useful (and vice versa for compute throughput).
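Here is a minimal sketch of that copy/compute overlap plus cudaEvent timing, using two streams and a made-up kernel called process; the buffer size, chunking, and launch configuration are placeholders:

    #include <cuda_runtime.h>
    #include <cstdio>

    // Hypothetical kernel standing in for the real computation.
    __global__ void process(float* data, size_t n) {
        size_t i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;
    }

    int main() {
        const size_t n = 1 << 20, chunk = n / 2;
        float *h_buf, *d_buf;
        cudaMallocHost((void**)&h_buf, n * sizeof(float));  // pinned, so async copies can overlap
        cudaMalloc((void**)&d_buf, n * sizeof(float));
        for (size_t j = 0; j < n; ++j) h_buf[j] = 1.0f;

        cudaStream_t s[2];
        for (int i = 0; i < 2; ++i) cudaStreamCreate(&s[i]);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);

        // Issue copy + kernel per chunk in its own stream so transfer and compute overlap.
        for (int i = 0; i < 2; ++i) {
            size_t off = i * chunk;
            cudaMemcpyAsync(d_buf + off, h_buf + off, chunk * sizeof(float),
                            cudaMemcpyHostToDevice, s[i]);
            process<<<(unsigned)((chunk + 255) / 256), 256, 0, s[i]>>>(d_buf + off, chunk);
        }

        cudaEventRecord(stop);            // default stream waits for both streams here
        cudaEventSynchronize(stop);
        float ms = 0.f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("elapsed: %.3f ms\n", ms);

        cudaFree(d_buf);
        cudaFreeHost(h_buf);
        return 0;
    }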
This is just a start, check out the GTC presentation and the other webinars on the NVIDIA website.
If you are using Windows... Check Nexus:
http://developer.nvidia.com/object/nexus.html
The CUDA profiler is rather crude and doesn't provide a lot of useful information. The only way to seriously micro-optimize your code (assuming you have already chosen the best possible algorithm) is to have a deep understanding of the GPU architecture, particularly with regard to using shared memory, external memory access patterns, register usage, thread occupancy, warps, etc.
Maybe you could post your kernel code here and get some feedback?
The NVIDIA CUDA developer forum is also a good place to go for help with this kind of problem.
I hung back because I'm no CUDA expert, and the other answers are pretty good IF the code is already pretty near optimal. In my experience, that's a big IF, and there's no harm in verifying it.
To verify it, you need to find out if the code is for sure not doing anything it doesn't really have to do. Here are ways I can see to verify that:
Run the same code on the vanilla processor, and either take stackshots of it, or use a profiler such as Oprofile or RotateRight/Zoom that can give you equivalent information.
Run it on the CUDA processor and do the same thing, if possible.
What you're looking for are lines of code that have high occupancy on the call stack, as shown by the fraction of stack samples containing them. Those are your "bottlenecks". It does not take a very large number of samples to locate them.

Mono for embedded

I'm a C# developer and I'm interested in embedded development for chips like the MSP430. Please suggest some tools and tutorials.
The Mono framework is very powerful and customizable; Mono-specific examples would be the most helpful.
Mono requires a 32-bit system; it is not going to work on 16-bit systems.
There is currently no full Mono support for the MSP430.
Mono doesn't run in a vacuum - you will need to write a program that exposes the microcontroller's functionality to Mono, then link against Mono and flash the entire thing onto the microcontroller. This program will have to provide some functionality to Mono that is normally provided by an operating system.
The page igorgue linked to gives you a good starting point for this process: http://www.mono-project.com/Embedding%5FMono
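For reference, the core of embedding the Mono runtime in a host C/C++ program looks roughly like the sketch below; the assembly name is made up, and on a microcontroller you would also have to supply everything the runtime normally expects from an OS:

    #include <mono/jit/jit.h>
    #include <mono/metadata/assembly.h>

    int main() {
        // Initialise the runtime and load a managed assembly (name is hypothetical).
        MonoDomain* domain = mono_jit_init("embedded");
        MonoAssembly* assembly = mono_domain_assembly_open(domain, "app.exe");
        if (!assembly) return 1;

        // Run the assembly's Main() with a minimal argv.
        char* argv[] = { const_cast<char*>("app.exe") };
        int ret = mono_jit_exec(domain, assembly, 1, argv);

        mono_jit_cleanup(domain);
        return ret;
    }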
I don't know what the requirements of the Mono VM are, though. It may be easy to compile and use, or you may have to write a lot of supporting code, or dig deep into mono to disable code you won't be using, or can't support on the chosen microcontroller.
Further, Mono isn't gargantuan, but it's complex and designed with larger 32-bit processors in mind. It may or may not fit onto the relatively limited 16-bit MSP430.
However, the MSP430 does have a GCC port, so you don't have to port the Mono code to a new compiler, which should make your job easier.
Good luck, and please let us know what you decide to do, and how it works out!
-Adam
The tools to use Mono on an MSP430 just aren't available. Drop all the C# and use C/C++ instead.
MSP devices usually have 8 to 256 KB of flash and 256 bytes (!) to 16 KB of RAM.
Using C# or even C++ is really not an option. Also, complex frameworks are a no-go.
If you really want to start with MSP430 (which are powerful, fast and extremely low-power processors for their area of use), you should look for the MSPGCC toolchain.
http://mspgcc.sourceforge.net/
It contains a compiler (GCC 3.22 based) along with all the necessary tools (make, a JTAG programmer, etc.). Most MSP processors are supported, with code optimisation and support for internal hardware such as the hardware multiplier.
All you need is an editor (you can use Eclipse, UltraEdit or even plain Notepad) and some knowledge of how to write a simple makefile.
And you should be prepared to write tight code (especially in terms of RAM usage).
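As an idea of scale, a complete MSP430 program built with such a toolchain can be as small as the sketch below; the register and bit names come from the device headers, and the header name and delay idiom vary with the toolchain version:

    #include <msp430.h>

    int main(void) {
        WDTCTL = WDTPW | WDTHOLD;        // stop the watchdog timer
        P1DIR |= BIT0;                   // make P1.0 an output (LED on many boards)

        for (;;) {
            P1OUT ^= BIT0;               // toggle the LED
            volatile unsigned int i;     // crude busy-wait delay
            for (i = 50000; i > 0; --i) { }
        }
    }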
I think Netduino might be of some interest to you.
Visit their website at http://netduino.com/.
It's open-source hardware (like Arduino, http://www.arduino.cc/).
It runs the .NET Micro Framework (http://www.microsoft.com/en-us/netmf/default.aspx), the flavour of .NET oriented toward embedded development.
Regards,
Giacomo