What is the difference between the Lego Mindstorms 1.0 and 2.0 firmware?

I am thinking about buying a Mindstorms kit (I don't currently own one, but I have used 1.0 at university) and I am a bit unsure about the benefits of 2.0 over 1.0. I have seen other posts on the subject, all generally saying that 2.0 is better, but I have some more specific questions that I can't seem to find any answers to.
Apart from the different LEGO pieces and sensors you get with the 2.0 kit, is there any difference between a 1.0 NXT brick and a 2.0 NXT brick? From what I can determine from other sources, they are the same except for the firmware installed. Am I right in saying I could buy a 1.0 kit and install the same firmware that comes with the 2.0 kit, making the bricks identical, or is the 1.0 brick not compatible with the 2.0 firmware?
Also, I plan to use a different programming language like C or Java, so I need to install specific firmware for that anyway, like librcx or leJOS, right? So if I am using C or Java, as opposed to the provided LEGO programming tools, it doesn't matter whether I have 1.0 or 2.0 (except for the LEGO pieces in the kit), am I right?
In a nutshell, assuming I am using librcx or leJOS and I don't care about the sensors and LEGO pieces included, is there any benefit to buying a 2.0 kit over the 1.0 kit?
Thanks in advance

I've done a bit more research, and from what I can determine there is no difference in the NXT hardware between the 1.0 and 2.0 kits.
The NXT provided with the 2.0 kit uses firmware v1.28, which can be downloaded from the LEGO website for free and can also be installed on the NXT that comes with the 1.0 kit, making them identical.
If you use something like librcx or leJOS, it will replace the firmware anyway, so again the 1.0 and 2.0 bricks are interchangeable.
In a nutshell, the only apparent difference between the 1.0 and 2.0 kits is the LEGO pieces and sensors, which can always be bought separately if required. 1.0 kits are generally about £40-£50 cheaper on eBay, so you can save yourself some money by buying an older kit and purchasing the extra parts. However, the new colour sensor supplied with 2.0 costs about £40 on its own, so if you want that you may as well just get the 2.0 kit.
Hope this helps!

I think it also depends on whether you have a retail or an educational kit. Educational kits come with rechargeable battery packs, which are different between 1.0 and 2.0. The battery packs have different-sized connectors and therefore use different power adapters for recharging.
Here is a tidbit of information: LEGO will be releasing a new Mindstorms EV3 late this year.
http://mindstorms.lego.com/en-us/News/ReadMore/Default.aspx?id=476243

They are pretty much identical, except that 2.0 has an RGB LED light, which is nothing very special, and the 1.0 kit has a sensor that 2.0 does not have. So actually, 1.0 is a tiny bit better than 2.0, and LEGO Mindstorms NXT 1.0 is also a lot cheaper than NXT 2.0.

Actually, there is one big advantage of the 1.0 brick - it has DC input, so you can power it with a 9V power supply and not wear down your batteries. This is invaluable when you are prototyping or building stationary robots or model trains. None of the later models have this excellent feature.
There's some misinformation in some of the other answers to this question. I'd go to Wikipedia, which has a very good write-up here:
http://en.wikipedia.org/wiki/Lego_Mindstorms#RCX

Kinect fusion with Kinect 2.0

I am looking for code that can perform Kinect Fusion, as done by Newcombe, with the Kinect v2.0. I know that the principle used for the Kinect 1.0 is different from the principle used for the Kinect 2.0. I have found one well-known open source library which does this, PCL, but the code is specific to the Kinect 1.0. Is there any open source library which can do this for the Kinect 2.0? If not, is there a way I can refactor the current open source code to make it work for 2.0? If I am refactoring the code for 2.0, is there anything else I have to modify other than changing the resolution of the camera outputs? Is there a way I can compensate for the extra noise in the Kinect 2.0's output other than increasing the smoothing in the bilateral filtering? Is there a way I can handle the extra distortion in the Kinect 2.0's results? I am looking to use libfreenect2.
Thanks :)

glMaterialfv is deprecated in OpenGL ES 2?

I found that some functions, like glMaterialfv, are no longer available in the OpenGL ES 2 headers,
for example the following call:
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, color)
How do I set materials using OpenGL ES 2? I need to set both front and back ambient and diffuse colors.
The fixed-function pipeline is not available in ES 2.0, so everything involving materials, lights, the matrix stack, etc. is gone. If you look at the official spec file, ES 2.0 was actually specified as a new API, not as a new version of the ES 1.1 API.
With ES 2.0, you have to write your own shader programs in GLSL for lighting calculations and a lot of other functionality that the fixed pipeline previously handled for you. The initial hurdle might look higher than it is for ES 1.1, but you will get used to it pretty quickly, and then appreciate the new power and flexibility.
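For example, the glMaterialfv call from the question roughly turns into a uniform in your own fragment shader. This is only a minimal sketch of the idea: the names u_diffuseColor and v_lambert are made up for illustration, and the shader compile/link boilerplate is omitted.

// GLSL ES fragment shader: the diffuse material colour is now just a uniform.
const char* fragmentSrc =
    "precision mediump float;\n"
    "uniform vec4 u_diffuseColor;   // takes the role of GL_DIFFUSE\n"
    "varying float v_lambert;       // N.L term computed in the vertex shader\n"
    "void main() {\n"
    "    gl_FragColor = vec4(u_diffuseColor.rgb * v_lambert, u_diffuseColor.a);\n"
    "}\n";

// ... glCreateShader / glShaderSource / glCompileShader / glLinkProgram as usual,
// which gives you a linked 'program' ...

// Instead of glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, color):
GLfloat color[4] = { 0.8f, 0.2f, 0.2f, 1.0f };
glUseProgram(program);
glUniform4fv(glGetUniformLocation(program, "u_diffuseColor"), 1, color);

If you need different front and back materials, the usual ES 2.0 approach is to branch on the built-in gl_FrontFacing in the fragment shader and pick between two uniforms.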
You should be able to find some good tutorials for ES 2.0 online.
OpenGL ES 2.0 is not backwards compatible with OpenGL ES 1.1 - these are two completely different APIs.

Official Kinect SDK vs. Open-source alternatives

Where do they differ?
What are the advantages of choosing libfreenect or OpenNI+SensorKinect, for example, over the Official SDK, and vice-versa?
What are the disadvantages?
Please note that the answer below describes the state of things as of this date, and some facts may very well become outdated in the near future. The current state of the official Kinect SDK is beta 1.00.12.
The first obvious difference is that the official SDK is maintained by the Microsoft Research team, while OpenKinect is an open source SDK maintained by the open source community. Both have their pros and cons.
The official SDK is developed by Microsoft, which also develops the hardware and should therefore know internal details about the device that the open source community has to reverse engineer. Obviously this is to Microsoft's advantage.
Microsoft is pouring a lot of money into this device, and I am sure that they will do whatever they feel is necessary to keep their SDK up to par. Having that financial backing brings many advantages.
On the other hand, never underestimate the force of the open source community: "The OpenKinect community consists of over 2000 members contributing their time and code to the Project. Our members have joined this Project with the mission of creating the best possible suite of applications for the Kinect. OpenKinect is a true "open source" community!" - http://openkinect.org/wiki/Main_Page.
OpenKinect was released long before the official SDK, as the Kinect device was hacked on the first or second day of its release. Kudos to OpenKinect!
Programming languages supported:
Official SDK: C++, C#, or Visual Basic by using Microsoft Visual Studio 2010.
OpenKinect: Python, C, C++, C#, Java, Lisp and more! Obviously not requiring Visual Studio.
Operating systems support:
Official SDK: only installs on Windows 7.
OpenKinect: runs on Linux, OS X and Windows
Clearly advantage OpenKinect.
License:
The official SDK, in its current beta state, is for testing only. The SDK has been developed specifically to encourage wide exploration and experimentation by academic, research and enthusiast communities; commercial applications are not permitted. Note, however, that this will probably change in future releases of the SDK. Visit the FAQ for more information.
OpenKinect appears to be open for commercial usage, but online sources state that it may not be that simple. I would take a good look at the terms before releasing any commercial apps with it. Read Kinect – Licensing implications of open hardware projects for more info.
Documentation and support:
Official SDK: well documented and provides a support forum
OpenKinect: appears to have a mailing list, Twitter and IRC, but no official forum/Q&A? The documentation on the website is not as rich as I would like it to be.
Device calibration:
Different Kinect devices may differ slightly depending on the batch that they were produced in. Thus device calibration is sometimes required. But:
The official SDK does not provide any calibration settings, but I have so far not had to calibrate the device I am working on. According to something I read online (link lost), the calibration parameters are written to the Kinect device at production time, so with the official SDK calibration is not needed.
OpenKinect features device calibration: http://openkinect.org/wiki/Calibration. Thus I believe that you should calibrate your device if you go with OpenKinect.
If it's true that calibration is only needed with OpenKinect, that is a big advantage for the official SDK, as it is easier to distribute and install applications without that requirement.
Personally, after a failed try with the OpenKinect SDK I went with the official SDK, which
came with drivers that installed out of the box
came with examples and code that made it easy to get down to business
All in all, I could start my own development within 15 minutes or so.
Now, after working with the Kinect for a few months, I have to say that I am quite satisfied with the API provided. I cannot, however, compare it to the OpenKinect SDK, as I never actually got it working (but perhaps I didn't give it a fair try).
UPDATE: As of February 1st 2012 there is a commercial license for the official SDK:
"The commercial license for this release authorizes development and distribution of commercial applications. The prior SDK was a beta, and as a result was appropriate only for research, testing and experimentation, and was not suitable for use with a final, commercial product. The new license will enable developers to create and sell their Kinect for Windows applications to end user customers using Kinect for Windows hardware on Windows platforms."
Developer Frequently Asked Questions
As explained by Avada Kedavra in his/her answer, these are some interesting differences:
supported operating systems: you can only use Microsoft SDK on Windows, while open source solutions are usually able to work on other operating systems;
programming languages: you have a wider choice with open source solutions, while Microsoft only supports C++ and C# (Visual Basic is no longer supported as of SDK 2.0);
documentation and support: Microsoft offers a good forum and well-written documentation (with a lot of samples), but there are also several well-documented open source solutions;
license: Microsoft is more or less proprietary, open source is more or less free. Consider also that open source ideas have sometimes been bought by big companies and turned into something that is no longer open. It probably won't happen in your case, but keep this possibility in mind.
In my personal opinion, the most significant difference between open source solutions and Microsoft SDKs is strictly related to the skeletal tracking algorithm.
While depth and RGB data can be effectively provided by both open/free APIs and Microsoft SDKs, implementing skeletal tracking capabilities is not only a matter of reverse engineering.
To implement such an algorithm, developers must have strong competence in pattern recognition and machine learning, and I am quite sure that this kind of knowledge is available in the open source community. But the implementation of skeletal tracking is based on a "trained" algorithm, which requires a lot of experiments to collect a very large amount of data. That data is then used to "train" the algorithm so that it can recognize the skeletal joints.
Getting enough data, and also adjusting it and using it properly, requires a lot of time and money. Microsoft researchers and developers are in the best position to work on this kind of thing, simply because it is their job.
In my previous experience, I noticed that open source solutions provide good skeletal tracking capabilities, but they are not at the level of what Microsoft offers with its SDK.
Remember also that the Microsoft SDK provides a lot of additional capabilities, like facial recognition and joint orientation, as well as several widgets that are very useful if you want to quickly build a gestural GUI.
So my suggestion is: if you are working on a project in which you simply need depth and/or RGB data, or if you need to use a programming language that is not supported by the Microsoft SDK, then you should opt for an open source solution. Otherwise, the Microsoft SDK would be my first choice.
I would strongly recommend the Cinder framework. (libcinder.org)
It supports both OpenNI and Kinect development, if you're using C++. It now supports Kinect SDK 1.7 and OpenNI 2, via these CinderBlocks:
MS Kinect SDK 1.7 (stable)
https://github.com/BanTheRewind/Cinder-MsKinect
OpenNI 2 / NITE 2.2 (alpha)
https://github.com/wieden-kennedy/Cinder-OpenNI
Both can do skeletal tracking out of the box, with OpenNI being capable of tracking up to six skeletons simultaneously. OpenNI 2 is gaining rapidly on the Kinect SDK, although the new Kinect will probably change that when it comes out next month. However, the basic underlying principles are unlikely to change.
The main drawback of the initial release of OpenNI was that it required a full-body activation pose to recognise a user, which was a deal breaker for a lot of applications. However, this seems to have been solved in the newer versions, and OpenNI 2 also supports robust hand tracking at close range, although it still requires a focus gesture initially. If you work on Mac or Linux, it's pretty much your only choice.

OpenKinect Maturity

I'm interested in writing some homebrew code for the Microsoft Kinect console. I have a few applications which I think would translate well to the platform. I've been toying with the idea of giving it a shot using the OpenKinect drivers and libraries. Obviously this would be a lot of work, but I am wondering just how much. Does anyone have experience with OpenKinect? Do you get only the raw video/audio data from the device, or has anyone written higher level abstractions to make common tasks easier?
The OpenKinect library is basically a driver — at least for now — so don't expect many high-level functions from it. You will more or less get the raw data from both the depth and the video cameras.
This is basically an array received in a callback function each time a frame arrives.
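To make that concrete, here is a rough sketch of what a depth callback looks like with libfreenect's C API. Exact header paths, depth formats and setup calls vary a little between libfreenect versions, so treat this as an outline rather than copy-paste code.

#include <libfreenect.h>
#include <cstdio>
#include <cstdint>

// libfreenect calls this each time a depth frame arrives; 'depth' is the raw
// per-pixel depth buffer for that frame.
static void depth_cb(freenect_device* dev, void* depth, uint32_t timestamp)
{
    const uint16_t* buf = static_cast<const uint16_t*>(depth);
    std::printf("depth frame at %u, first pixel = %u\n", timestamp, buf[0]);
}

int main()
{
    freenect_context* ctx = nullptr;
    freenect_device*  dev = nullptr;

    if (freenect_init(&ctx, nullptr) < 0) return 1;
    if (freenect_open_device(ctx, &dev, 0) < 0) return 1;  // first Kinect on the bus

    freenect_set_depth_callback(dev, depth_cb);
    // Newer versions also expect a freenect_set_depth_mode(...) call here.
    freenect_start_depth(dev);

    // Pump USB events; the callback above fires from inside this loop.
    while (freenect_process_events(ctx) >= 0) { }

    freenect_stop_depth(dev);
    freenect_close_device(dev);
    freenect_shutdown(ctx);
    return 0;
}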
You can give it a try by following the instructions provided on the OpenKinect website; it's really quick to install and try, and you can play a bit with the provided glview application to get a feel for what's possible.
I've set up a few demos using OpenCV and got pretty cool results even though I didn't have much background in computer vision, so I can only encourage you to try it yourself!
Alternatively, if you're looking for more advanced functionality, the OpenNI framework was just released this week and provides some impressive high-level algorithms such as skeleton tracking and some gesture recognition. Part of the framework consists of proprietary algorithms from PrimeSense (like the powerful skeleton tracking module...). I haven't tried it yet and don't know how well it integrates with the Kinect and the different operating systems, but since a bunch of people from different groups (OpenKinect, Willow Garage...) are working hard on it, that shouldn't be an issue within a week.
To elaborate further on what Jules Olleon wrote, I've worked with OpenNI (http://www.openni.org) and the algorithms built on top of it (NITE), and I highly recommend using these frameworks. Both frameworks are well documented and come with numerous samples from which you can start out.
Basically, OpenNI abstracts the lower-level details of working with the sensor and its driver for you, and gives you a convenient way to get what you want from a "generator" (e.g. xn::DepthGenerator for getting the raw depth data). OpenNI is open source and free to use in any application. OpenNI also handles platform abstraction for you. As of today, OpenNI is supported and works fine on Windows 32/64-bit and Linux, and is in the process of being ported to OS X. Bindings are available for multiple programming languages (C, C++, .NET, Python, and a few others, I believe).
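As a rough illustration of the generator idea (OpenNI 1.x C++ API, with all error checking omitted), getting raw depth frames looks something like this:

#include <XnCppWrapper.h>

int main()
{
    xn::Context context;
    context.Init();

    xn::DepthGenerator depth;          // the "generator" mentioned above
    depth.Create(context);

    context.StartGeneratingAll();

    for (int frame = 0; frame < 100; ++frame)
    {
        context.WaitOneUpdateAll(depth);                   // block until a new depth frame
        const XnDepthPixel* pixels = depth.GetDepthMap();  // raw depth values (millimetres)
        // ... process 'pixels' (width * height samples) ...
        (void)pixels;
    }

    context.Release();                 // Shutdown() in older OpenNI versions
    return 0;
}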
NITE has additional interfaces built on top of OpenNI, which give you higher-level results (e.g. hand-point tracking, skeletons, scene analysis, etc.). You'll want to check the subtleties of NITE's license regarding when/where you can use it, but it's still probably the easiest and fastest way to get analysis (e.g. a skeleton) for now. NITE is closed source, so PrimeSense needs to supply a binary version for you to use. Currently Windows and Linux versions are available.
I haven't worked with OpenKinect, but I've been working with OpenNI and SensorKinect for a few months now for my research. If you are planning to work with raw data from the Kinect, they work great at giving you depth and video (they don't support motor control). I've used them with C++ and OpenGL on both Windows 64-bit and Ubuntu 32-bit with almost no modifications to the code. It's very easy to learn if you know basic C++. Installing it might be a little headache, though.
For more advanced features such as skeleton detection, gesture recognition, etc., I highly recommend using middleware such as NITE with OpenNI, or one of the ones listed here: Middlewares developed around OpenNI, rather than reinventing the wheel. NITE is also very easy to use once you have OpenNI working; e.g. joint recognition is only around 10-20 extra lines of code.
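To give a feel for those "extra lines", here is a hedged sketch of joint recognition with OpenNI's user generator plus NITE (OpenNI 1.x API). It assumes a context set up as in the depth sketch above; user/calibration callbacks and error checking are omitted, and exact calls vary slightly between NITE versions.

// Assumes 'context' is an initialised xn::Context with generation started.
xn::UserGenerator user;
user.Create(context);
user.GetSkeletonCap().SetSkeletonProfile(XN_SKEL_PROFILE_ALL);
// In a real application you also register new-user / calibration callbacks and
// call RequestCalibration / StartTracking from them; omitted here for brevity.

XnUserID ids[4];
XnUInt16 count = 4;
user.GetUsers(ids, count);                         // users currently in view

for (XnUInt16 i = 0; i < count; ++i)
{
    if (!user.GetSkeletonCap().IsTracking(ids[i]))
        continue;

    XnSkeletonJointPosition head;
    user.GetSkeletonCap().GetSkeletonJointPosition(ids[i], XN_SKEL_HEAD, head);
    // head.position.X/Y/Z is the joint position in mm, head.fConfidence in [0,1]
}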
Something that I would recommend to my younger self is to learn and work with a basic game engine (e.g. Unity) rather than working directly with OpenGL. It gives you much better and more enjoyable graphics with less hassle, and it also lets you easily integrate your program with other tools such as PhysX. I haven't tried any myself, but I know there are some plugins for using Kinect drivers in Unity.

Which version of OpenGL/Direct3D should I target for optimum compatibility? [closed]

When we develop web pages we can broadly work out which browsers to support based on market share.
When we develop in .NET we can broadly work out which .NET version to develop for based on which Windows versions have it installed.
But when developing OpenGL or Direct3D applications, how do we know which video cards people (I mean "people" as opposed to "hard core gamers" or "companies using CAD" :P) are using? Are there statistics on such things? Is there some common logic that people use to work out what version to support? Just as most companies have supported (perhaps until just recently) a minimum of IE6 in web pages, is there a general consensus to support a minimum of, say, OpenGL 1.5, or DirectX 8 or something?
I note that we can find out which specific video cards support which versions of these APIs, but how do we know which video cards people are actually using? Is there any kind of research on this?
N.B. I'm more interested in OpenGL because that's what I'm using, but I mention Direct3D because I assume the same problem applies.
Market fragmentation for 3d hardware has always been a huge problem. There is no simple answer.
You need to define who your users are. Is this a casual-oriented game that you intend to sell to people who haven't updated their computer in years? Is this a business application that is aimed at workstation-grade hardware? Are you just looking for an average middle-of-the-road game buyer? Is this something simpler than a game, like a screensaver that you plan to sell to people who don't buy games at all? Do you need to support laptops? Netbooks?
The market fragmentation is considerably worse with OpenGL than it is with DirectX. The official standard requirements from the Khronos Group are all well and good. But the hardware vendors tend to be very slow to update their drivers to match the standard. Many features are required by the GL spec, but are only implemented in a fallback software path that is absolutely unusable in commercial software. The new OpenGL 3.1 spec tries to improve this situation by removing support for most older, poorly supported features. But if you need to support hardware more than a couple years old (or most modern Intel integrated GPUs) then GL 3.1 would be aiming too high.
A good place to start for general hardware usage stats among game purchasers is the Steam Hardware Survey ( http://store.steampowered.com/hwsurvey ). Steam is the most popular digital game distribution service and covers a variety of games from casual to core. They reset the numbers periodically to keep the survey current. Last year they reported that they had 25 million active users, so the sample population is pretty good.
So you probably need to narrow your target customer group down more. I would recommend picking some recent competing applications that you consider to have a similar customer base to yours and basing your target hardware around what they require.
OpenGL 2.1 is a good bet. The newer OpenGL 3 doesn't offer that much more functionality. You have to check for the availability of all the OpenGL extensions you rely on anyway, so you don't lose much by sticking with 2.1.
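For instance, a minimal runtime check against the version string and extension list might look like the sketch below (pre-3.x style: glGetString(GL_EXTENSIONS) is the classic approach, while core profiles later switched to glGetStringi). The extension picked here is just an example.

#include <GL/gl.h>      // plus your usual context/loader setup (GLUT, SDL, GLEW, ...)
#include <cstring>
#include <cstdio>

// True if the space-separated extension list contains 'name'.
// Must be called with a current OpenGL context.
bool hasExtension(const char* name)
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return ext != NULL && std::strstr(ext, name) != NULL;
}

void checkCapabilities()
{
    const char* version = reinterpret_cast<const char*>(glGetString(GL_VERSION));
    std::printf("GL version: %s\n", version ? version : "(no context)");

    if (!hasExtension("GL_EXT_framebuffer_object"))
    {
        // choose a fallback render path that does not need FBOs
    }
}

Libraries like GLEW wrap exactly this kind of query, so in practice you would often just test GLEW_EXT_framebuffer_object after glewInit().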
For DirectX: use DirectX 9c. That is the latest version that still runs on Windows XP. Drivers are stable and very mature. DirectX 10 offers more functionality, but you will lock out the user base that still runs Windows XP.
About compatibility for non-gamers: graphics cards that don't support these APIs (at least to a usable degree) died out more than five years ago. Given the typical life cycle of a PC, you can be almost sure no one will have problems.
If any user complains that the software doesn't run on his 10-year-old Matrox Parhelia card, he should just buy the cheapest graphics card he can get. It will run much faster and cost a fraction of what the software does.
It depends on the timeframe of the project on the DirectX side.
If the project is to be ready in months, then follow the advice from Nils.
If the timeframe is over a year, I would reconsider limiting yourself to DirectX 9, as XP machines are becoming fewer and fewer over time. DirectX 11 would be a good idea, especially with backwards compatibility down to feature level 9.0.