I am trying to change the dude model in Kinect Avateering. I have tried many ways and none of them works, except for replacing the TGA files.
I opened dude.fbx in 3ds Max and tried to simply import and re-export it, but it goes wrong and I get build errors.
I have also tried to replace the whole dude.fbx file with another FBX human model file, and I always get the same error:
"no input skeleton found"
To be more specific, I am building a virtual dressing room that uses 3D models to make it more stable than video processing, so I need to create a 3D human model programmatically; that is why I started from the dude model.
Any help with this would be appreciated.
I'm also doing a virtual dressing room project and I too struggled with that dude model. There is a tutorial about exactly what you are trying to do:
http://mopred.blogspot.com/2012/11/changing-avateering-avatar-in-kinect.html
That said, I think the best approach is to create your own models; turning, rotating, and translating them is then easy. Animating a boned model would be much harder.
What I am trying to do is count the revs (the "vroom" sound) of a physical car through my app. I am coding in React Native, and I don't plan to create something complex like communicating with the car's built-in computer.
Instead, I was planning to have the app listen to nearby sounds: if a nearby sound is a rev, the app simply counts it.
I have done the other features in my app, but listening to the sound and detecting whether it is a "vroom" sound is what I am stuck with.
Based on my research, I can see that I have to make use of the Fast Fourier Transform algorithm, but I am confused about how to implement it in my React Native app. I am still searching for a package that has an implementation.
I have seen apps that can be used to tune a violin, guitar, etc. What I am trying to do is similar to this, but simpler; once I get a basic idea, I will be able to get going. In my case, my app will be listening for the high-decibel sound.
Any inputs would be highly appreciated.
This is known as Acoustic Event Detection. You can treat it as an audio classification problem, and the best way to solve it is with supervised machine learning, for example a CNN on mel-spectrograms. Here is an introduction. You can do the same in JavaScript using TensorFlow.js; the official documentation contains a tutorial.
One of the first steps is to collect a small dataset of examples of "vroom" sounds versus other loud non-vroom sounds.
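Before building the full classifier, it can help to see what an FFT-based feature even looks like. Below is a minimal Python/NumPy sketch (not the CNN pipeline the answer describes, just an illustrative first step) that measures how much of a frame's spectral energy falls in a low-frequency band, which is a crude way to separate an engine-like rumble from other loud sounds. The band edges and sample rate here are made up for the example.

```python
import numpy as np

def band_energy_ratio(frame, sample_rate, band=(50.0, 400.0)):
    """Fraction of spectral energy inside `band` (Hz) for one audio frame."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum.sum()
    return float(spectrum[in_band].sum() / total) if total > 0 else 0.0

# Synthetic test signals: a 120 Hz "engine-like" tone vs. a 2 kHz whistle.
sr = 8000
t = np.arange(sr) / sr
engine = np.sin(2 * np.pi * 120 * t)
whistle = np.sin(2 * np.pi * 2000 * t)

print(band_energy_ratio(engine, sr))   # close to 1.0
print(band_energy_ratio(whistle, sr))  # close to 0.0
```

A real detector would compute features like this (or mel-spectrogram frames) over a sliding window and feed them to the trained model rather than thresholding a single ratio.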
This year Google produced 5 different packages for seq2seq:
- seq2seq (claimed to be general-purpose, but inactive)
- nmt (active, but probably NMT-specific)
- legacy_seq2seq (clearly legacy)
- contrib/seq2seq (probably not complete)
- tensor2tensor (similar purpose, also under active development)
Which package is actually worth using for an implementation? They all seem to take different approaches, but none of them seems stable enough.
I too have had a headache over this issue: which framework to choose? I want to implement OCR using an encoder-decoder with attention. I tried to implement it using legacy_seq2seq (it was the main library at the time), but it was hard to understand the whole process; it should certainly not be used any more.
https://github.com/google/seq2seq: to me this looks like an attempt at a command-line training script where you don't write your own code. If you want to train a translation model, this should work, but in other cases it may not (as with my OCR), because there is not enough documentation and the user base is too small.
https://github.com/tensorflow/tensor2tensor: this is very similar to the implementation above, but it is maintained, and you can add more of your own code, e.g. for reading your own dataset. The basic use case is again translation, but it also supports tasks like image captioning, which is nice. So if you want to try a ready-to-use library and your problem is txt->txt or image->txt, you could try this. It should also work for OCR. I'm just not sure there is enough documentation for every case (such as using a CNN as the feature extractor).
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/seq2seq: in contrast to the above, this is just a pure library, which is useful when you want to build a seq2seq model yourself in TF. It has functions for adding attention, sequence loss, etc. In my case I chose this option, since it gives me much more freedom at each step of the pipeline: I can choose the CNN architecture, the RNN cell type, bi- or uni-directional RNN, the type of decoder, and so on. But then you will need to spend some time getting familiar with all the ideas behind it.
https://github.com/tensorflow/nmt : another translation framework, based on tf.contrib.seq2seq library
From my perspective you have two options:
If you want to test an idea quickly and be sure you are using efficient code, use the tensor2tensor library. It should help you get early results, or even a very good final model.
If you want to do research, are not sure exactly what the pipeline should look like, or want to learn the ideas behind seq2seq, use the library in tf.contrib.seq2seq.
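Since both the question and the answer revolve around "encoder-decoder with attention", here is a tiny NumPy sketch of plain dot-product attention between one decoder state and the encoder outputs, just to make the core operation concrete. The shapes and values are arbitrary; in a real tf.contrib.seq2seq model this step is handled for you by the library's attention mechanisms rather than written by hand.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dot_product_attention(decoder_state, encoder_outputs):
    """decoder_state: (d,), encoder_outputs: (T, d) -> (context (d,), weights (T,))."""
    scores = encoder_outputs @ decoder_state   # (T,) alignment scores
    weights = softmax(scores)                  # attention distribution over T time steps
    context = weights @ encoder_outputs        # weighted sum of encoder outputs
    return context, weights

rng = np.random.default_rng(0)
enc = rng.standard_normal((5, 8))   # 5 encoder time steps, hidden size 8
dec = rng.standard_normal(8)        # one decoder hidden state
context, weights = dot_product_attention(dec, enc)
print(context.shape)  # (8,)
```

The context vector is then concatenated with the decoder state to predict the next output symbol; the weights tell you which input positions the decoder is "looking at".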
I am totally new to Blender. I know how to create objects, but rigging is a bit of a problem for me. I downloaded a male model with everything set up, but when I move his arm (bend it with the bones), his neck detaches from his head (they are two separate objects).
Here is an image of what happens. What can I do?
Before:
After:
It's a bit hard to see, but if the head comes loose, the problem is in the model. You could fix it by repairing the model: in mesh edit mode, make it one connected object. As a result of changing the mesh (adding and connecting surfaces), the weights of the original model will no longer work, so you would then need to re-apply weights from the bones to the mesh.
As you said you are totally new to Blender, I think all those steps would be a bit much (I have repaired meshes, but I have 5 years of experience); repairing a model might be too complex for you (it is advanced work, usually a few hours in Blender to fix something like this).
It might be much easier to start off with correct models. You can get them at Blend Swap, or you can install the Bastioni add-on; its author is one of the MakeHuman creators and ported that code into Blender. Look for Bastioni and you will get really good human models that you can pose.
I understand that the Kinect uses a predefined skeleton model to return a skeleton based on the depth data. That's nice, but it only lets you get a skeleton for people. Is it possible to define a custom skeleton model? For example, maybe you want to track your dog while he's doing something. So, is there a way to define a model with four legs, a tail, and a head, and track that?
Short answer: no. Using the Microsoft Kinect for Windows SDK's skeleton tracker, you are stuck with the one they give you. There is no way to inject a new set of logic or rules.
Long answer: sure. You cannot reuse the pre-built skeleton tracker, but you can write your own. The skeleton tracker uses the depth data to determine where a person's joints are. You could take that same data and process it for a different skeleton structure.
Microsoft does not provide access to all the internal functions that process and output the human skeleton, so we would be unable to use it as any type of reference for how the skeleton is built.
In order to track anything but a human skeleton you'd have to rebuild it all from the ground up. It would be a significant amount of work, but it is doable... just not easily.
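To give a feel for what "process the depth data yourself" means at the very lowest level, here is a NumPy sketch that segments the nearest blob in a synthetic depth frame and returns its centroid. This is nowhere near a tracker, just the kind of primitive a custom (say, quadruped) pipeline would build on; the frame, units, and tolerance below are all invented for the example.

```python
import numpy as np

def nearest_blob_centroid(depth, tolerance_mm=200):
    """Segment pixels within `tolerance_mm` of the nearest valid depth; return (row, col) centroid."""
    valid = depth > 0                     # Kinect reports 0 for unknown depth
    nearest = depth[valid].min()          # closest measured point in the frame
    mask = valid & (depth <= nearest + tolerance_mm)
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

# Synthetic 480x640 depth frame: background at 3000 mm, a "subject" at 1000 mm.
frame = np.full((480, 640), 3000, dtype=np.uint16)
frame[100:300, 200:400] = 1000
print(nearest_blob_centroid(frame))  # (199.5, 299.5), the center of the patch
```

A real tracker would go far beyond this: splitting the blob into limbs, fitting your custom skeleton model to the segmented points, and smoothing the joints over time.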
There is a way to learn a bit about this subject by studying the Face Tracking DLL example from the SDK samples:
http://www.microsoft.com/en-us/kinectforwindows/develop/
I've been working with augmented reality APIs lately, but I haven't been able to achieve irregular-shape detection, namely of the hand. I want to be able to detect hand shapes through the video/camera and execute code based on hand signs. Does anything like this already exist?
Did you have a look at OpenCV?
Here are some of the links I found just using Google: Face Detection using OpenCV, Vision For Robots.
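A classic OpenCV recipe for this is skin-color thresholding followed by contour and convexity-defect analysis to count fingers (via cv2.inRange and cv2.findContours). Here is a NumPy-only sketch of just the thresholding step on a synthetic frame, so the idea is visible without an OpenCV install; the skin "rule" below is a deliberately crude illustration, not a tuned detector.

```python
import numpy as np

def skin_mask(rgb_frame, lo=(95, 40, 20)):
    """Very crude skin detector: R > G > B with per-channel minimums (illustrative only)."""
    r = rgb_frame[..., 0].astype(int)
    g = rgb_frame[..., 1].astype(int)
    b = rgb_frame[..., 2].astype(int)
    return (r > lo[0]) & (g > lo[1]) & (b > lo[2]) & (r > g) & (g > b)

# Synthetic frame: dark background with a skin-toned rectangle standing in for a hand.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[...] = (30, 30, 30)               # dark background
frame[40:80, 60:100] = (200, 140, 90)   # skin-like 40x40 patch
mask = skin_mask(frame)
print(mask.sum())  # 1600 -> the 40x40 patch
```

In a real pipeline you would run this per video frame, extract the largest contour from the mask, and classify its shape (finger count, orientation) to trigger your hand-sign actions.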