What is the difference between the different country versions of the Kinect for Windows? There are versions like L6M-00002 for UK/Ireland only. Does this just mean a different power plug, or is it a genuinely different version of the device?
No, the functionality is the same; only the power supply differs. There may also be some region-specific selling regulations.
I am looking for a PLC system for our brewery. I would like to buy a second-hand PLC with the necessary modules. I have seen the AB SLC500 1747-L542 CPU for a good price ($120) with a lot of modules, but I don't know if it is new enough for a project (Windows compatibility, programming environment, etc.).
Should I buy it, or would that be a bad decision? If it is not a good choice, what do you suggest? I have also looked at the Siemens S7-200, Siemens ET 200, and others.
Thank you.
If you want to go cheap, use something from Automation Direct or EZ Automation. You don't just need a CPU; you also need I/O cards, a rack, a power supply, software, and an HMI. That's going to be a ton of money up front. The two vendors I mentioned bundle most of that for a much lower cost of entry.
Yes, this is certainly new enough to use. However, you will need an entire rack: for instance, Ethernet, DeviceNet, or I/O cards to connect the processor to your components.
Also, as Bill J mentioned, AB may be the industry standard in America, but it is expensive. Depending on your brewery's income it may not be the smart choice. The same goes for Siemens.
Quote from AB's website
Our Bulletin 1747 SLC™ 500 control platform is used for a wide variety of applications. Rockwell Automation has announced that some SLC 500 Bulletin numbers are discontinued and no longer available for sale. Customers are encouraged to migrate to our newer CompactLogix™ 5370 or 5380 control platforms.
link to website
So I would say that for a new project, no, it's not worth buying in 2017.
Depending on how many I/O points you need, I would recommend going with the CompactLogix or MicroLogix from AB. The lowest-end CompactLogix is my favorite for all-around tasks; I have standardized the whole plant on it as the lowest-level PLC for the simplest machines. Built in, you get Ethernet capability, 16 inputs, and 16 outputs. You can expand the controller with additional modules (up to 8 for the lowest part number), including extra discrete I/O, analog modules, etc.
Do not use an SLC: the line is obsolete, and even though you can get it to work without much trouble, it is not a good choice for a new project.
It is hard to say exactly what you need without knowing the specifics of your project, so I would recommend using the "Integrated Architecture Builder" (a free download from AB) to properly size a controller for your needs.
I'm new to Roku development (in the R&D phase, actually). I read that we can't test a Roku app on a simulator and need a real device. If we develop an application, how will we test it?
I checked the Roku developer site and various links on the internet, but could not find anything that answers my questions.
As far as I know, Roku sells 5 devices, so:
Can we do one app that supports all 5 devices?
Do we need assets in multiple resolutions?
Do I need to buy all devices?
Can we do one app that supports all 5 devices?
Yes. Roku is trying hard to keep their platform coherent, though there are performance differences between the OpenGL and non-OpenGL devices. The "legacy" models (<2222) are no longer supported; the firmware is kept current for the others.
Do we need assets in multiple resolutions?
Theoretically yes; practically, not really. You can make do with assets in only one resolution if you RTFM and pre-plan carefully. You'll need 3 sizes of app icon, no sweat. For the real UI though, you can do either HD (720) or FHD (1080) and let it scale accordingly - the thing is, TV is very forgiving of scaled graphics because of the 10 ft viewing distance (a 60" 1080p screen is "Retina" beyond 8 ft). You can largely skip SD.
Do I need to buy all devices?
No. And there are many more than 5 device models in use - see https://forums.roku.com/viewtopic.php?f=34&t=86471&start=15#p536994 for some statistics (RokuCo does not publish statistics, so that's about the best info available). If you buy only 2 devices, I'd say get:
a #42xx (Roku 3 or current Roku 2) as reference model with OpenGL
a #27xx (Roku 1 or SE) or #5xxx RokuTV as reference for "slower", non-OGLES
As a 3rd model, I'd say the "new HDMI stick" #3600. You could get that one as your only device, since its performance is somewhere between (1) and (2) above... but I don't think developing with only 1 device is a good idea.
One thing you may not have noticed is that there are also the "Roku TV" sets under the Hisense/TCL/Sharp/Insignia brands, models #5xxx. These are proper TVs with proper Roku smarts - meaning they can run your Roku app. And one can be had for as little as (skimming the Best Buy site) $130-150 for a 24-32" screen.
And I haven't even mentioned the 4K/HDR craze here, nor the new 37xx/46xx models that will be out for the holiday season (I only expect minor, evolutionary changes there).
Disclosure: I am a Roku employee.
That's correct, you'll need an actual Roku device to test your application. You can buy them used on eBay for very cheap ($20-35), or you can buy a brand new unit from our website for $50. The latest Roku Streaming Stick (Model #3600X) is my personal favorite option, and a great value.
You don't need to buy all the devices, although we do recommend having several models so that you can QA test across devices. However, one popular development approach is to build your channel on a lower-end model, which in theory assures it will work on higher-end models as well. This also means you have to spend less on your purchase.
Download our Precertification Checklist and open the third sheet, which includes a list of all our model numbers and corresponding code names. I'd recommend building on a "Giga" or a "Paolo."
Think of this cost as an R&D expense. Plus, you'll get to enjoy the device in your free time as well!
As for your other questions:
Yes, you only build one app, and it will work on all the different devices. We do recommend taking the time to make sure your app is optimized across all devices, including older devices with less processing power. Our Performance Guide is a great starting point for this.
The other option is to check whether the first digit of the device model is less than “3” (which indicates a lower-end device) and add conditionals based on that, such as removing animations.
You can find two examples of this on our RokuDev GitHub page:
Hero-Grid-Channel —> Components —> LoadingIndicator —> LoadingIndicator.brs —> Line 244
Multi-Live-Channel —> Source —> Main.brs —> Line 21
Yes, you do need different assets based on resolutions. Take a look at this document: https://github.com/rokudev/docs/blob/master/design/channel-artwork.md
I am working on a project where we are going to use multiple Kinects and merge the point clouds. I would like to know how to use two Kinects at the same time. Are there any specific drivers or an embedded application for this?
I used the Microsoft SDK, but it only supports a single Kinect at a time, and for our project we cannot use multiple PCs, so now I have to find a way around the problem. If anyone has experience with drivers that can access multiple Kinects, please share your views.
I assume you are talking about Kinect v2?
Check out libfreenect2. It's an open source driver for Kinect v2, and it supports multiple Kinects on the same computer. It doesn't provide any of the "advanced" features of the Microsoft SDK like skeleton tracking, but getting the point clouds is no problem.
You also need to make sure your hardware supports multiple Kinects. You'll most likely need a separate USB 3.0 controller for each Kinect. Of course, those controllers need to be Kinect v2 compatible, meaning they need to use Intel or NEC/Renesas chips. That can easily be achieved with PCIe USB 3.0 expansion cards, but those can't be plugged into PCIe x1 slots: a single lane doesn't have enough bandwidth. x8 or x16 slots usually work.
See Requirements for multiple Kinects#libfreenect2.
And you also need a strong enough CPU and GPU. Depth processing in libfreenect2 is done on the GPU using OpenGL or OpenCL (CPU is possible as well, but very slow). RGB processing is done on the CPU. It needs quite a bit of processing power to give you the raw data.
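To make the libfreenect2 route more concrete, here is a minimal C++ sketch that opens two Kinect v2 sensors on one machine and grabs a depth frame from each. It follows my understanding of the libfreenect2 API (the Freenect2, Freenect2Device, and SyncMultiFrameListener classes), so treat it as an untested outline rather than a finished example.

// Minimal sketch: open two Kinect v2 sensors with libfreenect2 and grab one
// frame set from each. Error handling is reduced to a bare minimum.
#include <libfreenect2/libfreenect2.hpp>
#include <libfreenect2/frame_listener_impl.h>
#include <iostream>

int main()
{
    libfreenect2::Freenect2 freenect2;

    if (freenect2.enumerateDevices() < 2)
    {
        std::cerr << "Need at least two Kinect v2 sensors connected." << std::endl;
        return -1;
    }

    // One listener per device; each waits for color, IR, and depth frames.
    libfreenect2::SyncMultiFrameListener listener0(
        libfreenect2::Frame::Color | libfreenect2::Frame::Ir | libfreenect2::Frame::Depth);
    libfreenect2::SyncMultiFrameListener listener1(
        libfreenect2::Frame::Color | libfreenect2::Frame::Ir | libfreenect2::Frame::Depth);

    // Open each device by its serial number (index 0 and 1).
    libfreenect2::Freenect2Device *dev0 = freenect2.openDevice(freenect2.getDeviceSerialNumber(0));
    libfreenect2::Freenect2Device *dev1 = freenect2.openDevice(freenect2.getDeviceSerialNumber(1));

    dev0->setColorFrameListener(&listener0);
    dev0->setIrAndDepthFrameListener(&listener0);
    dev1->setColorFrameListener(&listener1);
    dev1->setIrAndDepthFrameListener(&listener1);

    dev0->start();
    dev1->start();

    // Grab one frame set from each sensor.
    libfreenect2::FrameMap frames0, frames1;
    listener0.waitForNewFrame(frames0);
    listener1.waitForNewFrame(frames1);

    libfreenect2::Frame *depth0 = frames0[libfreenect2::Frame::Depth];
    libfreenect2::Frame *depth1 = frames1[libfreenect2::Frame::Depth];
    std::cout << "Sensor 0 depth: " << depth0->width << "x" << depth0->height << std::endl;
    std::cout << "Sensor 1 depth: " << depth1->width << "x" << depth1->height << std::endl;

    listener0.release(frames0);
    listener1.release(frames1);

    dev0->stop();  dev0->close();
    dev1->stop();  dev1->close();
    return 0;
}

From there you can feed each depth frame into libfreenect2's Registration class to build a per-sensor point cloud, and then merge the clouds yourself after calibrating the extrinsics between the two sensors.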
Working with multiple Kinect v1 sensors is very difficult because of the IR interference between the sensors.
Based on what I read in this Gamasutra article, Microsoft got rid of the interference problem with the time-of-flight mechanism that the Kinect v2 sensor uses to gauge depth.
Does that mean I could use multiple Kinect v2 sensors at the same time, or did I misunderstand the article?
Thanks for the help!
I asked this question, in person, of the dev team at the meetup in San Francisco in April. The answer I got was:
"This feature is 3+ months away. We want to prioritize single-Kinect features before working on multiple Kinects."
I'm a researcher, and my goal is to have a bunch of odd setups, so this is a frustrating answer, but I understand that they need to prioritize usage that will be immediately useful to a larger market.
Could you connect them to multiple computers and stream data back and forth?
As #escapecharacter mentioned, support for multiple Kinect v2 sensors is unlikely in the very near future.
I can also confirm it: one of the Kinect v2 SDK samples has this comment:
// for Alpha, one sensor is supported
this.kinectSensor = KinectSensor.Default;
I think the hardware itself is capable of avoiding the interference problem. Hopefully the slightly larger amount of data (the higher-resolution RGB stream) won't be a problem with multiple sensors (and the available USB bandwidth), and then it would just be a matter of enabling the SDK to safely handle multiple sensor instances in the future.
I wouldn't expect a quick update to the SDK to enable this, though, so in the meantime, although not ideal, you could try either of the following:
Using multiple v2 sensors on multiple machines communicating over a local network, passing only processed/minimal data to keep the delay as small as possible (a rough sketch of the sender side follows after this list)
Using multiple v1 sensors with Shake'n'Sense (PDF link to paper) to reduce interference
At least you would, to a certain extent, make progress testing some of your project's assumptions with multiple sensors, and you could update the project when the updated SDK is out.
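For the first option, the networking itself can stay very simple. The sketch below is a hypothetical sender using plain POSIX TCP sockets: each Kinect machine pushes a small, already-processed payload per frame (here just a few floats standing in for whatever minimal data you extract) to the machine that merges everything. The address, port, and payload layout are made up for illustration.

// Hypothetical sender: push a small, already-processed payload per frame
// (e.g. a few tracked 3D points) to the merging machine over TCP.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <vector>

int main()
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in server{};
    server.sin_family = AF_INET;
    server.sin_port = htons(9000);                         // example port
    inet_pton(AF_INET, "192.168.1.10", &server.sin_addr);  // example address of the merging machine

    if (connect(sock, (sockaddr *)&server, sizeof(server)) != 0)
    {
        perror("connect");
        return 1;
    }

    // Pretend per-frame result: a handful of floats instead of a full depth image.
    std::vector<float> payload = {0.1f, 0.2f, 1.5f};

    // Simple framing: 4-byte length prefix followed by the raw floats.
    uint32_t len = htonl(payload.size() * sizeof(float));
    send(sock, &len, sizeof(len), 0);
    send(sock, payload.data(), payload.size() * sizeof(float), 0);

    close(sock);
    return 0;
}

The receiver just accepts one connection per sensor machine, reads the length prefix, and then reads that many bytes; keeping the per-frame payload small is what keeps the added delay small.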
I realize I misread your question and interpreted it as "how can I connect two Kinect 2s to a computer," when you were actually asking how to avoid interference, with Kinect 2 as your hoped-for solution.
You can hack around Kinect 1 interference by lightly shaking one of the sensors independently of the other. See here:
http://channel9.msdn.com/coding4fun/kinect/Shaking-some-sense-into-using-multiple-Kinects-with-Shake-n-Sense
One of the craziest things I've ever seen that actually worked. I was at Microsoft Research when they figured this out, and it works quite well.
You can have a Kinect v1 viewing the same scene as a Kinect v2 without interference. I know this isn't exactly what you're looking for, but it could be useful.
Two years later, and this still cannot be done.
See:
https://social.msdn.microsoft.com/Forums/en-US/8e2233b6-3c4f-485b-a683-6bacd6a74d53/how-to-prevent-interference-between-multiple-kinect-v2-sensors?forum=kinectv2sdk
https://github.com/OpenKinect/libfreenect2/issues/424
As stated in the second link,
What happens is this: Each Kinect v2 continuously switches between different modulation frequencies. When two Kinects switch to the same frequency range, the interference occurs. They typically gradually drift into the same range and after a while drift out of that range again. So, theoretically, you just have to wait a bit until the interference is gone. The only way I found to stop the interference immediately was to disconnect (and reconnect) the concerned Kinect from its power supply
...
Quite unfortunate that these modulation frequencies aren't controllable at this time. Let's hope MS surprises us with that custom firmware
IIRC, I came across a group at MIT that got custom firmware from MS which solved the problem, but I can't seem to find the reference. Unfortunately, it is not available to the public.
I think we can't use multiple Kinect v2 sensors in the same environment, because they interfere with each other far more than Kinect v1 sensors do. Since Kinect v2 depth sensing is based on the time-of-flight principle, multiple Kinect v2 sensors will interfere a lot. For Kinect v1 the interference is not as severe.
Where do they differ?
What are the advantages of choosing libfreenect or OpenNI+SensorKinect, for example, over the Official SDK, and vice-versa?
What are the disadvantages?
Please note that the answer below is current as of this date, and some facts may well be outdated in the near future. The current state of the official Kinect SDK is beta 1.00.12.
The first obvious difference is that the official SDK is maintained by the Microsoft Research team, while OpenKinect is an open source SDK maintained by the open source community. Both have their pros and cons.
The official SDK is developed by Microsoft, which also develops the hardware and therefore has internal knowledge of the device that the open source community must reverse engineer. Obviously this is to Microsoft's advantage.
Microsoft is pouring a lot of money into this device, and I am sure that they will do what they feel is necessary to keep their SDK up to par. Having that financial backing brings many advantages.
On the other hand, never underestimate the force of the open source community: "The OpenKinect community consists of over 2000 members contributing their time and code to the Project. Our members have joined this Project with the mission of creating the best possible suite of applications for the Kinect. OpenKinect is a true "open source" community!" - http://openkinect.org/wiki/Main_Page.
OpenKinect was released long before the official SDK, as the Kinect device was hacked within a day or two of its release. Kudos to OpenKinect!
Programming languages supported:
Official SDK: C++, C#, or Visual Basic by using Microsoft Visual Studio 2010.
OpenKinect: Python, C, C++, C#, Java, Lisp and more! Obviously not requiring Visual Studio.
Operating systems support:
Official SDK: only installs on Windows 7.
OpenKinect: runs on Linux, OS X and Windows
Clearly advantage OpenKinect.
License:
The official SDK, in its current beta state, is only for testing. The SDK has been developed specifically to encourage wide exploration and experimentation by academic, research, and enthusiast communities; commercial applications are not permitted. Note, however, that this will probably change in future releases of the SDK. Visit the FAQ for more information.
OpenKinect appears to be open for commercial usage, but online sources state that it may not be that simple. I would take a good look at the terms before releasing any commercial apps with it. Read Kinect – Licensing implications of open hardware projects for more info.
Documentation and support:
Official SDK: well documented and provides a support forum
OpenKinect: appears to have a mailing list, Twitter, and IRC, but no official forum/Q&A? The documentation on the website is not as rich as I would like it to be.
Device calibration:
Different Kinect devices may differ slightly depending on the batch that they were produced in. Thus device calibration is sometimes required. But:
The official SDK does not provide any calibration settings, but I have so far not had to calibrate the device I am working on. According to something I read online (link lost), the calibration parameters are written to the Kinect device at production time, so with the official SDK calibration is not needed.
OpenKinect features device calibration: http://openkinect.org/wiki/Calibration. Thus I believe that you should calibrate your device if you go with OpenKinect.
If it's true that calibration is only needed for OpenKinect, that is a big advantage for the official SDK, as it is easier to distribute and install applications without such a requirement.
Personally, after a failed try with the OpenKinect SDK I went with the official SDK, which
came with drivers that installed out of the box
came with examples and code that made it easy to get down to business
All in all, I could start my own development within 15 minutes or so.
Now, after working with the Kinect for a few months, I have to say that I am quite satisfied with the API provided. I cannot, however, compare it to the OpenKinect SDK, as I in fact never got it working (but perhaps I didn't give it a fair try).
UPDATE: As of February 1st 2012 there is a commercial license for the official SDK:
"The commercial license for this release authorizes development and distribution of commercial applications. The prior SDK was a beta, and as a result was appropriate only for research, testing and experimentation, and was not suitable for use with a final, commercial product. The new license will enable developers to create and sell their Kinect for Windows applications to end user customers using Kinect for Windows hardware on Windows platforms."
Developer Frequently Asked Questions
As explained by Avada Kedavra in his/her answer, these are some interesting differences:
supported operating systems: you can only use Microsoft SDK on Windows, while open source solutions are usually able to work on other operating systems;
programming languages: you have a wider choice with open source solutions, while Microsoft only supports C++ and C# (Visual Basic is no longer supported as of SDK 2.0);
documentation and support: Microsoft offers a good forum and well-written documentation (with a lot of samples), but there are also several open source solutions that are well documented;
license: Microsoft is more or less proprietary, open source is more or less free. Consider also that open source projects have sometimes been bought by big companies and turned into something that is no longer open. That will probably not happen in your case, but keep this possibility in mind.
In my personal opinion, the most significant difference between open source solutions and Microsoft SDKs is strictly related to the skeletal tracking algorithm.
While depth and RGB data can be effectively provided by both open/free APIs and Microsoft SDKs, implementing skeletal tracking capabilities is not only a matter of reverse engineering.
To implement such an algorithm, developers must have strong competence in pattern recognition and machine learning, and I am quite sure that this kind of knowledge is available in the open source community. But skeletal tracking is based on a "trained" algorithm, which requires a lot of experiments to collect a very large amount of data. That data is then used to train the algorithm so that it can recognize the skeletal joints.
Getting enough data, and then adjusting and using it properly, requires a lot of time and money. Microsoft researchers and developers are in the best position to work on this kind of thing, simply because it is their job.
In my experience, open source solutions provide good skeletal tracking capabilities, but they are not at the level of what Microsoft offers with its SDK.
Remember also that the Microsoft SDK provides a lot of additional capabilities, like facial recognition and joint orientation, as well as several widgets that are very useful if you want to quickly build a gestural GUI.
So my suggestion is: if you are working on a project in which you simply need depth and/or RGB data, or if you need a programming language that is not supported by the Microsoft SDK, then you should opt for an open source solution. Otherwise, the Microsoft SDK would be my first choice.
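To give a feel for that "depth data only" path on the open source side, below is a rough C++ sketch using libfreenect (the OpenKinect driver for Kinect v1). It is based on my reading of the libfreenect C API and is meant as an outline, not verified code; check it against the current headers before relying on it.

// Rough sketch: grab raw 11-bit depth frames from a Kinect v1 with libfreenect.
#include <libfreenect/libfreenect.h>
#include <cstdint>
#include <cstdio>

// Called by libfreenect whenever a new depth frame arrives.
static void depth_cb(freenect_device *dev, void *depth, uint32_t timestamp)
{
    uint16_t *data = static_cast<uint16_t *>(depth);
    printf("depth frame at %u, first sample = %u\n", timestamp, data[0]);
}

int main()
{
    freenect_context *ctx;
    if (freenect_init(&ctx, nullptr) < 0)
        return 1;

    freenect_device *dev;
    if (freenect_open_device(ctx, &dev, 0) < 0)   // first Kinect on the bus
        return 1;

    freenect_set_depth_callback(dev, depth_cb);
    freenect_set_depth_mode(dev,
        freenect_find_depth_mode(FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
    freenect_start_depth(dev);

    // Pump USB events; depth_cb fires for every frame until the loop exits.
    while (freenect_process_events(ctx) >= 0)
    {
        // Break out here however your application decides it is done.
    }

    freenect_stop_depth(dev);
    freenect_close_device(dev);
    freenect_shutdown(ctx);
    return 0;
}

With the official SDK you would get the equivalent depth stream plus the skeleton data for free, which is exactly the trade-off described above.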
I would strongly recommend the Cinder framework. (libcinder.org)
It supports both OpenNI and Kinect development if you're using C++. It now supports Kinect SDK 1.7 and OpenNI 2, via these CinderBlocks:
MS Kinect SDK 1.7 (stable)
https://github.com/BanTheRewind/Cinder-MsKinect
OpenNI 2 / NITE 2.2 (alpha)
https://github.com/wieden-kennedy/Cinder-OpenNI
Both can do skeletal tracking out of the box, with OpenNI capable of tracking up to six skeletons simultaneously. OpenNI 2 is gaining rapidly on the Kinect, although the new Kinect will probably change that when it comes out next month. However, the basic underlying principles are unlikely to change.
The main drawback of the initial release of OpenNI was that it required a full-body activation pose to recognize a user, which was a deal-breaker for a lot of applications. However, this seems to have been solved in newer versions, and OpenNI 2 also supports robust hand tracking at close range, although it still requires an initial focus gesture. If you work on Mac or Linux, it's pretty much your only choice.