When developing for GNU Radio, should I use WX GUI or Qt GUI widgets?

I'm starting to use GNU Radio. Which of the available GUI toolkits should I choose?

You should always prefer Qt these days – it's the default GUI toolkit for GNU Radio 3.7. No one develops the WX functionality anymore, and bug fixes haven't been forthcoming in a long time. It is also well known that WX has performance issues, especially on integrated graphics cards.
Furthermore, WX GUI has been dropped entirely from the current release of GNU Radio, 3.8.

Related

VGA card (GPU) for Microsoft Cognitive Toolkit (CNTK)/TensorFlow

Where can I purchase a good VGA card (GPU) for Microsoft Cognitive Toolkit (CNTK)/TensorFlow programming? Can you suggest a commonly used GPU model at an affordable price?
For TensorFlow programming, you can find a list of CUDA-compatible graphics cards here and choose whichever fits your needs best. Be aware, though, that if you want to use the pre-built tensorflow-gpu binaries, you need a card with a CUDA Compute Capability of 3.5 or higher (at least on Windows).
If you want to run TensorFlow on Windows 10 with a graphics card of CUDA Compute Capability 3.0, you could look at this: you would build from source with some edits to the CMakeList, as described here. That would allow you to use a card with 3.0 capability.
As a fair warning on building your own TensorFlow binary, though: "don't build a TensorFlow binary yourself unless you are very comfortable building complex packages from source and dealing with the inevitable aftermath should things not go exactly as documented." (from here)
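Once the card is installed, a quick sanity check that your TensorFlow build actually sees the GPU is to list the local devices. A minimal sketch, assuming the TensorFlow 1.x API that the pre-built binaries of that era shipped:

    # Does this TensorFlow build see a usable CUDA GPU?
    import tensorflow as tf
    from tensorflow.python.client import device_lib

    print(tf.test.is_gpu_available())   # True only if a usable CUDA device was found
    for dev in device_lib.list_local_devices():
        print(dev.name)                 # e.g. /device:CPU:0, /device:GPU:0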
You need a CUDA-compatible graphics card with Compute Capability (CC) 3.0 or higher to use CNTK's GPU capabilities.
You can find CUDA-compatible graphics cards here and here (for older cards).
You can also use Microsoft Azure’s specialized Deep Learning Virtual Machine if you’re considering the cloud as an option.
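Similarly, you can ask CNTK which devices it sees. A minimal sketch, assuming the CNTK 2.x Python API (cntk.device.all_devices() and cntk.device.gpu() are the calls I'd expect there):

    # List the compute devices CNTK can use (CNTK 2.x Python API assumed).
    import cntk

    for dev in cntk.device.all_devices():
        print(dev.type(), dev.id())  # type() is the DeviceKind: 0 = CPU, 1 = GPU

    # To pin computation to the first GPU, if one is present:
    # cntk.device.try_set_default_device(cntk.device.gpu(0))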

Missing popup when hovering over FFT sink in gnuradio

I've been watching Michael Ossmann's video guides on SDR at Great Scott Gadgets. In his videos, he hovers over the FFT sink, which displays frequency and power values in a popup. If I do this on OS X, I don't get this yellow popup – besides the fact that my FFT sink looks different.
Is this a setting in GNU Radio, or might it be an OS X issue? I'm running GNU Radio 3.7.9.1.
Screenshot from the video:
Screenshot from my application:
EDIT: It appears that installing pyopengl solves this, as WX was falling back to "something" without it. Install it using pip install pyopengl and restart GNU Radio.
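If you want to verify the fix, a quick import check shows whether the GL pieces are present. A hedged sketch – these are the standard module names that pyopengl and wxPython provide, though the exact fallback logic inside GNU Radio's WX sinks may differ:

    # If either import fails, the WX FFT sink falls back to its
    # non-GL rendering path, as described in the answers below.
    import OpenGL       # installed by `pip install pyopengl`
    import wx.glcanvas  # wxPython's OpenGL canvas wrapper
    print("GL backend for the WX sinks looks available")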
I think WX fell back to the non-GL version. As Marcus said, this can have a lot of causes. On OS X, however, I was only missing python-opengl. Maybe that's also your problem.
I agree that we should switch to Qt in the long run, but since there are still several WX projects around, it might be worth having a working installation.
(Couldn't comment since I don't have the required reputation.)
It's a bit hard to say what goes wrong with the graphical sink in your case.
However, what I'd propose:
Abandon the WX GUI sinks in favor of the Qt GUI sinks (you will have to change the build option in the Options block from WX to QT, and replace your sinks with their Qt equivalents). WX is barely maintained anymore and might/will be removed from GNU Radio at some point – the Qt GUI sinks are simply more efficient, more portable, and have cool features (try the middle-mouse-button menu!).
In the case of a Frequency Plot, things would look something like the sketch below (hand-written against the GNU Radio 3.7 Python API; the cosine test source and all parameters are illustrative assumptions):
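    # Minimal flowgraph feeding a Qt GUI Frequency Sink, the Qt
    # equivalent of the old WX FFT sink (GNU Radio 3.7 Python API).
    import sys
    import sip
    from PyQt4 import Qt
    from gnuradio import gr, analog, blocks, qtgui
    from gnuradio.filter import firdes

    class freq_plot(gr.top_block, Qt.QWidget):
        def __init__(self):
            gr.top_block.__init__(self, "Frequency Plot")
            Qt.QWidget.__init__(self)
            samp_rate = 32000

            # Illustrative test source; in a real flowgraph this would
            # be your hardware source.
            src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 1000, 1)
            throttle = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
            sink = qtgui.freq_sink_c(
                1024,                        # FFT size
                firdes.WIN_BLACKMAN_hARRIS,  # window type
                0,                           # center frequency
                samp_rate,                   # bandwidth
                "Frequency Plot",            # plot title
                1)                           # number of inputs
            self.connect(src, throttle, sink)

            # Embed the sink's QWidget in this window.
            layout = Qt.QVBoxLayout(self)
            layout.addWidget(sip.wrapinstance(sink.pyqwidget(), Qt.QWidget))

    if __name__ == '__main__':
        qapp = Qt.QApplication(sys.argv)
        tb = freq_plot()
        tb.start()
        tb.show()
        qapp.exec_()
        tb.stop()
        tb.wait()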

C++ ide to run on an embedded linux device

I am putting together a very simple device for developing C++ applications on the go. Something like a PSP, size-wise, with a small keyboard to write code and an LCD screen for the output. The device also has a DVI out, so I can output to an external screen too.
Now, I am looking for a very lightweight C++ IDE for X11 (the device runs a Linux variant built on OpenEmbedded); something that works decently on the DVI monitor but mainly on the small LCD screen (4.3 inches at 480x272 resolution).
I am aware that anything under 1024x768 is pure joy for pain lovers, but the device will be used only in particular situations, when a laptop/netbook is not feasible.
While using the DVI out, I have no problem running GNOME; the screen is big enough to let me work comfortably. The problem is that it won't scale very well to the 480x272 screen.
The first idea was to use nano and g++, without even loading X11, but I need something with code completion and a minimal UI for the standard operations (build, run, step-by-step debugging, breakpoints), if possible. The memory is not that big (256 MB), so the smaller the IDE, the better.
What would you suggest, other than the text-editor-plus-g++ approach? I am new to the embedded Linux world; I use Eclipse on Ubuntu, but that one is incredibly huge and would simply kill such a small device. On Windows I would use Dev-C++, while on Mac I use either Code::Blocks or Xcode, but I don't really need all of their features on this device.
Thanks in advance
I had the same problem, developing on an ODROID (http://hardkernel.com) running Linux on an ARM processor. For a while I used NetBeans, but it was too heavy, so I finally gave up and wrote my own open-source IDE for the purpose.
https://github.com/amirgeva/coide

Official Kinect SDK vs. Open-source alternatives

Where do they differ?
What are the advantages of choosing libfreenect or OpenNI+SensorKinect, for example, over the Official SDK, and vice-versa?
What are the disadvantages?
Please note that the answer below reflects the state of things at the time of writing, and some facts may very well be outdated in the near future. The current state of the official Kinect SDK is beta 1.00.12.
The first obvious difference is that the official SDK is maintained by the Microsoft Research team, while OpenKinect is an open-source SDK maintained by the open-source community. Both have their pros and cons.
The official SDK is developed by Microsoft, which also develops the hardware, and therefore should know internal information about the device that the open-source community has to reverse engineer. Obviously this is to Microsoft's advantage.
Microsoft is pouring a lot of money into this device, and I am sure they will do what they feel is necessary to keep their SDK up to par. Having that kind of money behind it brings many advantages.
On the other hand, never underestimate the force of the open source society: "The OpenKinect community consists of over 2000 members contributing their time and code to the Project. Our members have joined this Project with the mission of creating the best possible suite of applications for the Kinect. OpenKinect is a true "open source" community!" - http://openkinect.org/wiki/Main_Page.
OpenKinect was released long before the official SDK, as the Kinect device was hacked on the first or second day of its release. Kudos to OpenKinect!
Programming languages supported:
Official SDK: C++, C#, or Visual Basic by using Microsoft Visual Studio 2010.
OpenKinect: Python, C, C++, C#, Java, Lisp and more! Obviously not requiring Visual Studio.
Operating systems support:
Official SDK: only installs on Windows 7.
OpenKinect: runs on Linux, OS X and Windows
Clearly advantage OpenKinect.
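To give a feel for the open-source route, grabbing frames with the libfreenect Python bindings (one of the language options listed above) is roughly this simple. A hedged sketch, assuming the freenect Python wrapper and its NumPy support are installed:

    # Grab one depth frame and one RGB frame via libfreenect's
    # synchronous Python API.
    import freenect

    depth, _ = freenect.sync_get_depth()  # 480x640 array of 11-bit depth values
    video, _ = freenect.sync_get_video()  # 480x640x3 RGB array
    print(depth.shape, video.shape)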
License:
The official SDK, in its current beta state, is for testing only. The SDK has been developed specifically to encourage wide exploration and experimentation by academic, research and enthusiast communities; commercial applications are not permitted. Note, however, that this will probably change in future releases of the SDK. Visit the FAQ for more information.
OpenKinect appears to be open for commercial usage, but online sources state that it may not be that simple. I would take a good look at the terms before releasing any commercial apps with it. Read Kinect – Licensing implications of open hardware projects for more info.
Documentation and support:
Official SDK: well documented and provides a support forum
OpenKinect: appears to have a mailing list, Twitter and IRC, but no official forum/Q&A? The documentation on the website is not as rich as I would like it to be.
Device calibration:
Different Kinect devices may differ slightly depending on the batch they were produced in, so device calibration is sometimes required. But:
The official SDK does not provide any calibration settings, yet I have so far not had to calibrate the device I am working on. According to something I read online (link lost), the calibration parameters are written to the Kinect device at production time, so with the official SDK calibration is not needed.
OpenKinect features device calibration: http://openkinect.org/wiki/Calibration. Thus I believe you should calibrate your device if you go with OpenKinect.
If it's true that calibration is only needed with OpenKinect, that is a big advantage for the official SDK, since it is easier to distribute and install applications without that requirement.
Personally, after a failed try with the OpenKinect SDK, I went with the official SDK, which
came with drivers that installed out of the box
came with examples and code that made it easy to get down to business
All in all, I could start my own development within 15 minutes or so.
Now, after working with the Kinect for a few months, I have to say that I am quite satisfied with the API provided. I cannot, however, compare it to the OpenKinect SDK, as I never actually got it working (but perhaps I didn't give it a fair try).
UPDATE: As of February 1st 2012 there is a commercial license for the official SDK:
"The commercial license for this release authorizes development and distribution of commercial applications. The prior SDK was a beta, and as a result was appropriate only for research, testing and experimentation, and was not suitable for use with a final, commercial product. The new license will enable developers to create and sell their Kinect for Windows applications to end user customers using Kinect for Windows hardware on Windows platforms."
Developer Frequently Asked Questions
As explained by Avada Kedavra in his/her answer, these are some interesting differences:
supported operating systems: you can only use the Microsoft SDK on Windows, while open-source solutions are usually able to work on other operating systems;
programming languages: you have a wider choice with open-source solutions, while Microsoft supports only C++ and C# (Visual Basic is no longer supported as of SDK 2.0);
documentation and support: Microsoft offers a good forum and well-done documentation (with a lot of samples), but several open-source solutions are well documented too;
license: Microsoft is more or less proprietary, open source is more or less free. Consider also that open-source ideas have sometimes been bought by big companies and transformed into something that is no longer open. Probably yours will not be such a case, but keep this additional eventuality in mind.
In my personal opinion, the most significant difference between open source solutions and Microsoft SDKs is strictly related to the skeletal tracking algorithm.
While depth and RGB data can be effectively provided by both open/free APIs and Microsoft SDKs, implementing skeletal tracking capabilities is not only a matter of reverse engineering.
To implement such an algorithm, developers must have strong competence in pattern recognition and machine learning, and I am quite sure that this kind of knowledge is available in the open-source community. But the implementation of skeletal tracking is based on a "trained" algorithm that requires a lot of experiments to collect a very large amount of data. These data are then used to "train" the algorithm so it can recognize the skeletal joints.
Getting enough data, but also adjusting and properly using them, requires a lot of time and money. Microsoft researchers and developers are in the best conditions to work on this kind of stuff, simply because it is their job.
In my previous experiences, I noticed that open source solutions provide good skeletal tracking capabilities, but they are not at the same level of what Microsoft offers with its SDK.
Remember also that the Microsoft SDK provides a lot of additional capabilities, like facial recognition or joint orientation, and several widgets that are very useful if you want to quickly build a gestural GUI.
So what I suggest is: if you are working on a project in which you simply need depth and/or RGB data, or if you need a programming language that the Microsoft SDK does not support, then you should opt for an open-source solution. Otherwise, the Microsoft SDK would be my first choice.
I would strongly recommend the Cinder framework (libcinder.org).
It supports both OpenNI and Kinect development if you're using C++. It now supports Kinect SDK 1.7 and OpenNI 2, via these CinderBlocks:
MS Kinect SDK 1.7 (stable)
https://github.com/BanTheRewind/Cinder-MsKinect
OpenNI 2 / NITE 2.2 (alpha)
https://github.com/wieden-kennedy/Cinder-OpenNI
Both can do skeletal tracking out of the box, with OpenNI capable of tracking up to six skeletons simultaneously. OpenNI 2 is gaining rapidly on the Kinect SDK, although the new Kinect will probably change that when it comes out next month. However, the basic underlying principles are unlikely to change.
The main drawback of the initial release of OpenNI was that it required a full-body activation pose to recognize a user, which was a deal breaker for a lot of applications. However, this seems to have been solved in the newer versions, and OpenNI 2 also supports robust hand tracking at close range, although it still requires a focus gesture initially. If you work on Mac or Linux, it's pretty much your only choice.

Mono for embedded

I'm a C# developer, and I'm interested in embedded development for chips like the MSP430. Please suggest some tools and tutorials.
The Mono framework is very powerful and customizable; Mono-specific examples would be especially helpful.
Mono requires a 32-bit system; it is not going to work on 16-bit systems.
There is currently no full Mono support for the MSP430.
Mono doesn't run in a vacuum – you will need to write a program that exposes the microcontroller functionality to Mono, then link it against Mono and flash the whole thing onto the microcontroller. This program has to provide some functionality to Mono that is normally provided by an operating system.
The page igorgue linked to gives you a good starting point for this process: http://www.mono-project.com/Embedding%5FMono
I don't know what the requirements of the Mono VM are, though. It may be easy to compile and use, or you may have to write a lot of supporting code, or dig deep into Mono to disable code you won't be using or can't support on the chosen microcontroller.
Further, Mono isn't gargantuan, but it is complex and designed with larger 32-bit processors in mind. It may or may not fit onto the relatively limited 16-bit MSP430.
However, the MSP430 does have a GCC port, so at least you don't have to port the Mono code to a new compiler, which should make your job easier.
Good luck, and please let us know what you decide to do, and how it works out!
-Adam
The tools to run Mono on an MSP430 just aren't available. Drop the C# and use C/C++ instead.
MSP430 devices usually have 8 to 256 KB of flash and 256 bytes (!) to 16 KB of RAM.
Using C#, or even C++, is really not an option, and complex frameworks are a no-go.
If you really want to start with the MSP430 (a family of powerful, fast and extremely low-power processors for their area of use), you should look at the MSPGCC toolchain.
http://mspgcc.sourceforge.net/
It contains a compiler (based on GCC 3.22) along with all the necessary tools (make, a JTAG programmer, etc.). Most MSP processors are supported, with code optimization and support for internal hardware such as the hardware multiplier.
All you need is an editor (you can use Eclipse, UltraEdit or even plain Notepad) and some knowledge of how to write a simple makefile.
And you should be prepared to write tight code (especially in terms of RAM usage).
I think the Netduino could be of some interest to you.
Visit their website at http://netduino.com/.
It's open-source hardware (like the Arduino, http://www.arduino.cc/).
It runs the .NET Micro Framework (http://www.microsoft.com/en-us/netmf/default.aspx), the flavor of .NET aimed at embedded development.
Regards,
Giacomo