Laptop requirements for Kinect (Xbox One) - kinect

I am using a Kinect for Xbox One camera on Windows to capture skeleton data and RGB data, retrieving 30 frames per second. I calculate the joint values of the human body, then calculate the angles between joints, and store the results in a directory. I want my laptop/system to compute the joint values and angles faster, but my current laptop computes them very slowly.
The specifications of my laptop are:
500 GB hard disk
6 GB RAM
1.7 GHz processor
Kindly tell me which system I should use to make these calculations faster. I really want a fast system/laptop. If anyone has an idea, please tell me.
Please also tell me the complete specifications of such a system. I want to use the latest, fastest technology, or any machine that resolves my issue.
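For reference, the per-frame math itself is cheap: the angle at a joint is just the arc-cosine of the normalized dot product of the two bone vectors, so at 30 fps and ~25 joints this should not tax even a slow laptop (the bottleneck is usually the SDK's tracking or the disk writes). A minimal sketch, with hypothetical joint coordinates:

```python
import math

def joint_angle(a, b, c):
    """Angle (in degrees) at joint b, formed by the segments b->a and b->c.

    a, b, c are (x, y, z) joint positions as reported by the skeleton
    stream (e.g. shoulder, elbow, wrist for the elbow angle).
    """
    # Vectors from the middle joint to its two neighbours.
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(v1[i] * v2[i] for i in range(3))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp against floating-point values slightly outside [-1, 1].
    cos_angle = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_angle))

# Example: a right angle at the middle joint.
print(joint_angle((0.0, 1.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # 90.0
```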

Your computer must have the following minimum capabilities:
32-bit (x86) or 64-bit (x64) processors
Dual-core, 2.66-GHz or faster processor
USB 2.0 bus dedicated to the Kinect
2 GB of RAM
Graphics card that supports DirectX 9.0c
Source: MSDN
Anyway, I suggest:
A desktop PC
with a 3 GHz (or faster) multi-core processor
with a GPU compatible with DirectX 11 and C++ AMP

Related

Finding CPU Platform Name and Family

I’ve been Googling all morning trying to find a way to use VB.Net code to find the platform and family of my CPU.
For instance:
Intel Xeon (Haswell) OR Intel Xeon (Sandy Bridge)
I have a program I’ve been working on that has several different versions specifically optimized for certain CPU platforms/families/generations.
So it’s important that my software is able to find the CPU platform/family/generation (i.e. Haswell/Broadwell/Ivy Bridge/Sandy Bridge/Coffee Lake...) so that it can run the version optimized for that CPU.
Everything I found so far can give me EVERY bit of info about my CPU and/or system EXCEPT the CPU platform/family/generation.
Can any of you guys help me out with this, or provide some kind of workaround to help?
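One workaround: CPUID leaf 1 reports the CPU's display family and model, and Intel documents which (family, model) pairs belong to which micro-architecture, so detection reduces to a lookup table. The sketch below is in Python for brevity (in VB.Net you could get the same Family/Model numbers from WMI's Win32_Processor or a CPUID P/Invoke); the table is deliberately partial and covers only a few models per generation:

```python
# Partial map from (display family, display model) to Intel codename.
# Model numbers are taken from Intel's CPUID documentation; this is a
# sketch, not an exhaustive table.
INTEL_CODENAMES = {
    (6, 0x2A): "Sandy Bridge", (6, 0x2D): "Sandy Bridge",
    (6, 0x3A): "Ivy Bridge",   (6, 0x3E): "Ivy Bridge",
    (6, 0x3C): "Haswell",      (6, 0x3F): "Haswell",
    (6, 0x45): "Haswell",      (6, 0x46): "Haswell",
    (6, 0x3D): "Broadwell",    (6, 0x4F): "Broadwell",
    (6, 0x9E): "Coffee Lake",
}

def codename(family, model):
    """Return the micro-architecture codename, or "unknown" if unmapped."""
    return INTEL_CODENAMES.get((family, model), "unknown")

print(codename(6, 0x3C))  # Haswell
```

Your program could fall back to a generic build whenever the lookup returns "unknown", so an out-of-date table degrades gracefully instead of crashing.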

Getting a pointcloud using multiple Kinects

I am working on a project where we are going to use multiple Kinects and merge the pointclouds. I would like to know how to use two Kinects at the same time. Are there any specific drivers or embedded applications?
I used the Microsoft SDK, but it only supports a single Kinect at a time, and for our project we cannot use multiple PCs, so I have to find a way around the problem. If anyone has experience accessing multiple Kinect drivers, please share your views.
I assume you are talking about Kinect v2?
Check out libfreenect2. It's an open source driver for the Kinect v2 and it supports multiple Kinects on the same computer. It doesn't provide any of the "advanced" features of the Microsoft SDK like skeleton tracking, but getting the pointclouds is no problem.
You also need to make sure your hardware supports multiple Kinects. You'll need (most likely) a separate USB3.0 controller for each Kinect. Of course, those controllers need to be Kinect v2 compatible, meaning they need to be Intel or NEC/Renesas chips. That can easily be achieved by using PCIe USB3.0 expansion cards. But those can't be plugged into PCIe x1 slots.
A single lane doesn't have enough bandwidth. x8 or x16 slots usually work.
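A quick back-of-envelope shows why a single lane is marginal: PCIe Gen1/Gen2 and USB 3.0 SuperSpeed all use 8b/10b encoding, so a Gen2 x1 link tops out at roughly the same 500 MB/s as a fully loaded SuperSpeed port, leaving no headroom once protocol overhead is added (the figures below ignore that overhead):

```python
def effective_MBps(gts_per_lane, lanes, encoding_efficiency=0.8):
    """Usable bandwidth in MB/s for a PCIe link.

    gts_per_lane: gigatransfers/s (2.5 for Gen1, 5.0 for Gen2).
    8b/10b encoding on Gen1/Gen2 leaves 80% of the raw bit rate.
    Protocol overhead is ignored, so real throughput is lower.
    """
    return gts_per_lane * 1000 * encoding_efficiency * lanes / 8

# USB 3.0 SuperSpeed: 5 Gbit/s raw, also 8b/10b encoded.
usb3_MBps = 5.0 * 1000 * 0.8 / 8

print(effective_MBps(2.5, 1))  # Gen1 x1: 250.0 -> well below USB 3.0
print(effective_MBps(5.0, 1))  # Gen2 x1: 500.0 -> equals USB 3.0, no headroom
print(usb3_MBps)               # 500.0
```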
See Requirements for multiple Kinects#libfreenect2.
And you also need a strong enough CPU and GPU. Depth processing in libfreenect2 is done on the GPU using OpenGL or OpenCL (CPU is possible as well, but very slow). RGB processing is done on the CPU. It needs quite a bit of processing power to give you the raw data.

Kinect to PC connection through controller USB3.0

I connected a Kinect 2.0 to a Gigabyte motherboard through an ST-Lab U710 USB 3.0 PCIe x1 controller (which claims SuperSpeed), inserted into a PCIe x16 slot, of course. All SDK samples work fine, but the USB 3.0 hub driver shows no SuperSpeed, so I get only 7 fps instead of 30.
I then bought a Foxconn H67S/H61SP motherboard because it claims SuperSpeed on its PCIe x16 slot, and used the ST-Lab U710 controller in that slot again. But the speed is still only 7 fps and there is no "SuperSpeed" in the hub driver. I also now have to disable/enable the HD Graphics adapter to start the Kinect demos (disabling/enabling the Kinect or the USB controller does not help).
When I connect the Kinect to more expensive motherboards with integrated USB 3.0, everything works fine and the speed is 30 fps.
My question: how do I get the words "Super Speed" to appear in the driver for the ST-Lab U710 controller (Renesas/NEC µPD720202 chip, which claims SuperSpeed)?
Also, why does the Host Controller Utility say "There is not any controller USB3.0" even though the controller works? Maybe the controller doesn't turn on SuperSpeed?
I use Windows 8.1 64-bit and drivers from the 2015 Drivers Pack Solution.
If the Kinect v2 works (even if only at low FPS), it is most likely not a USB problem. If the Kinect v2 can't connect as a SuperSpeed device, it won't work at all.
It's more likely that your CPU and/or graphics card is too slow. The System Requirements claim you need the following:
4 GB Memory (or more)
i7 3.1 GHz (or higher)
DX11 capable graphics adapter
But that's pretty vague and the resulting FPS can vary widely, depending on your specific setup.

What PC hardware is needed to try out GPU programming?

I am interested in trying out GPU programming. One thing that's not clear to me is what hardware I need: is any PC with a graphics card good enough? I have very little knowledge of GPU programming, so a gentle starting learning curve is best. If I had to make a lot of hacks just to run some tutorial because my hardware is not good enough, I'd rather buy new hardware.
I have a retired PC (~10 years old) running Ubuntu Linux. I am not sure what graphics card it has; it must be some old one.
I am also planning to buy a new sub-$500 desktop, which from my casual research normally comes with an AMD Radeon 7xxx or Nvidia GT 6xx graphics card. I assume any new PC is good enough for learning this kind of programming.
Anyway, any suggestion is appreciated.
If you want to use CUDA, you'll need a GPU from NVidia, and their site explains the compute capabilities of their different products.
If you want to learn OpenCL, you can start right now with an OpenCL implementation that has a CPU back-end. The basics of writing OpenCL code targeting CPUs or GPUs are the same; they differ mainly in performance tuning.
For GPU programming, any AMD or NVidia GPU made in the past several years will have some degree of OpenCL support, though there have been some new features introduced with newer generations that can't be easily emulated for older generations.
Intel's integrated GPUs in Ivy Bridge and later support OpenCL, but Intel only provides a GPU-capable OpenCL implementation for Windows, not Linux.
Also be aware that there is a huge difference between a mid-range and high-end GPU in terms of compute capabilities, especially where double-precision arithmetic is supported. Some low-end GPUs don't support double-precision at all, and mid-range GPUs often perform double-precision arithmetic 24 times slower than single-precision. When you want to do a lot of double-precision calculations, it's absolutely worth it to get a compute-oriented GPU (Radeon 7900 series or GeForce Titan and up).
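To put that 1:24 ratio in numbers (the TFLOPS figures below are hypothetical, chosen only to illustrate the scale of the gap):

```python
def effective_double_tflops(single_tflops, dp_ratio):
    """Double-precision throughput given single-precision TFLOPS and the
    FP32:FP64 rate ratio (24 means FP64 runs at 1/24 the FP32 rate)."""
    return single_tflops / dp_ratio

# Hypothetical mid-range card: 4 TFLOPS single precision at a 1:24 ratio.
print(effective_double_tflops(4.0, 24))  # ~0.167 TFLOPS double precision
# Compute-oriented cards have far better ratios, e.g. 1:3.
print(effective_double_tflops(4.0, 3))   # ~1.33 TFLOPS double precision
```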
If you want a low-end system with non-trivial GPU power, your best bet at the moment is probably a system built around an AMD APU.

Is it possible to do GPU programming if I have an integrated graphics card?

I have an HP Pavilion laptop; its so-called graphics card is some sort of integrated NVIDIA chip running on shared memory. To give you an idea of its capabilities: if a video game was made in the last 5 years at a cost of more than a couple million dollars, it just won't be playable on my computer.
Anyways, I was wondering if I could do GPU programming, like CUDA, on this thing. I don't expect it to be fast, I'd just like to get the experience without making my laptop catch fire in the meanwhile.
Find out what GPU your laptop is, and compare it against this list: http://en.wikipedia.org/wiki/CUDA#Supported_GPUs. Most likely, CUDA will not be supported.
This doesn't necessarily prevent you from doing "GPU programming", however. If the GPU supports fragment and vertex shaders, you can use the fixed pipeline to send data to the card (for example, through texture data) and do your processing in a fragment shader. You will then do a read from the pixel buffer to get the data back into system memory. Though hackish, this approach was quite popular until CUDA and other frameworks like OpenCL were introduced.
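To make that round trip concrete, here is a plain-Python, CPU-only sketch of the data flow; the function names mirror the GL steps (texture upload, fragment shader, read-back) but nothing here touches a real GPU:

```python
# CPU-side sketch of the old "GPGPU via fragment shader" round trip:
# upload data as a texture, run a per-pixel kernel, read the pixels back.

def upload_texture(data, width, height):
    # In real code: glTexImage2D with the data packed into texels.
    assert len(data) == width * height
    return [data[y * width:(y + 1) * width] for y in range(height)]

def fragment_shader(texel):
    # The per-pixel computation; a toy kernel that squares each value.
    return texel * texel

def render_and_read_back(texture):
    # In real code: draw a full-screen quad, then glReadPixels.
    return [fragment_shader(t) for row in texture for t in row]

tex = upload_texture([1.0, 2.0, 3.0, 4.0], 2, 2)
print(render_and_read_back(tex))  # [1.0, 4.0, 9.0, 16.0]
```

The real version has extra constraints the sketch hides: data must be encoded into the texture formats the card supports, and the read-back over the bus is often the slowest step.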