I'm applying MediaPipe Holistic to scenes that can be challenging to parse.
I would like to retrieve in Python the computed confidence score for each landmark returned by MediaPipe's pose model. After a few hours of searching, the answer seems to be "you can't get there from here":
https://github.com/google/mediapipe/issues/3104
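For context, the closest thing I can get to today is the per-landmark visibility field that the Python solution already exposes, which (per the issue above) is not the raw per-landmark confidence the model computes internally. A minimal sketch, assuming an OpenCV BGR frame loaded from a hypothetical scene.jpg:

```python
import cv2
import mediapipe as mp

holistic = mp.solutions.holistic.Holistic(static_image_mode=True)

image = cv2.imread("scene.jpg")  # hypothetical input image
results = holistic.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.pose_landmarks:
    for i, lm in enumerate(results.pose_landmarks.landmark):
        # visibility is a [0, 1] score, but it is not the raw per-landmark
        # confidence computed internally (see the issue linked above).
        print(i, lm.x, lm.y, lm.z, lm.visibility)
```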
Has anyone found a solution?
Yours truly,
Ronnen.
I have a target-finding, obstacle-avoiding helicopter in Unity Machine Learning Agents. Looking at the TensorBoard for my training, I'm trying to get a feel for how to interpret the "Losses/Value Loss".
I've googled many articles on ML loss, like this one, but I can't seem to get an intuitive understanding yet of what it all means for my little helicopter and what changes I should implement, if any. (The helicopter is rewarded for getting closer and again for reaching the target, and punished for getting farther away or colliding. It measures a variety of things like relative speed, relative target position, ray sensors and so on, and it does basically work at target-finding, whereas more complicated maze-type obstacles have not been tested or trained on yet. It's using 3 layers.) Thanks!
In reinforcement learning and specifically regarding actor/critic algorithms, value loss is the difference (or an average of many such differences) between the learning algorithm's expectation of a state's value and the empirically observed value of that state.
What is a state's value? A state's value is, in short, how much reward you can expect given that you start in that state. Immediate reward contributes fully to this amount. Rewards that may occur later, rather than immediately, contribute less, with more distant occurrences contributing less and less. We call this reduction in contribution to value a "discount", or we say that these rewards are "discounted".
Expected value is how much the critic part of the algorithm predicts the value to be. In the case of a critic implemented as a neural network, it's the output of the neural network with the state as its input.
Empirically observed value is the amount you get when you add up the rewards you actually received when you left that state, plus any rewards (discounted by some amount) you received immediately after that for some number of steps (we'll say that after these steps you ended up in state X), and (perhaps, depending on the implementation) plus some discounted amount based on the value of state X.
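A minimal sketch of those pieces (a discounted n-step return bootstrapped from the critic's estimate of state X, compared against the critic's prediction); the variable names are illustrative, and implementations differ in the exact loss form (plain squared error, clipping, batch averaging, etc.):

```python
def n_step_value_target(rewards, value_of_state_x, gamma=0.99):
    """Empirically observed value: discounted sum of the rewards actually
    received, plus the discounted critic estimate of the state reached
    after the last step (the bootstrap term)."""
    target = 0.0
    for k, r in enumerate(rewards):
        target += (gamma ** k) * r
    target += (gamma ** len(rewards)) * value_of_state_x
    return target

# Example: the critic predicted 1.0 for the start state, but the helicopter
# collected these rewards over 3 steps and ended in a state valued at 0.5.
predicted_value = 1.0
observed_value = n_step_value_target([0.1, -0.05, 0.2], value_of_state_x=0.5)

# Value loss for this single sample (typically averaged over a batch).
value_loss = (predicted_value - observed_value) ** 2
print(observed_value, value_loss)
```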
In short, the smaller it is, the better the critic got at predicting how well the agent is going to perform. This doesn't mean the agent gets better at playing - after all, one can be terrible at a game yet accurately predict that they will lose, and when, if they learn to choose actions that make them lose quickly!
I want to be able to label all of the muscles on an athlete's body. I have a lot of images where the athletes are in almost the same body pose, but the issue I am running into is that drawing a box around each muscle is inaccurate, as the boxes end up overlapping other muscles. Drawing exact outlines around them is difficult, since there are a lot of smaller muscles, and it creates inconsistency over 20-30 images. I was wondering if there is a way to feed in a human anatomy model and then have TensorFlow go in and label all of the muscles in the given pictures.
Or, I was wondering if you all had a different idea on how to approach this problem that I'm running into.
I don't have anybody else to ask and I've been researching this for a while, so if I missed or overlooked something, please forgive me.
The way I see it, you need to combine this with some preprocessing steps to normalize your target object in the image, such as:
identify the human,
identify the pose or skeleton (there are nowadays many open-source options for this, such as openpose-plus),
use the pose estimation results to label the limbs or parts of the body, from which you can proceed either with hand-crafted image processing or another segmentation model (a rough sketch follows below).
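As a rough illustration of the last two steps, here is a sketch using MediaPipe Pose as a stand-in for openpose-plus (any keypoint detector with similar output would do); the mapping from two keypoints to a region-of-interest crop is deliberately crude and just shows the idea:

```python
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=True)

def upper_arm_crop(image_path):
    """Return a crude crop around the right upper arm, defined by the
    shoulder and elbow keypoints, as a starting point for muscle labeling."""
    image = cv2.imread(image_path)
    h, w = image.shape[:2]
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.pose_landmarks:
        return None
    lm = results.pose_landmarks.landmark
    shoulder = lm[mp.solutions.pose.PoseLandmark.RIGHT_SHOULDER]
    elbow = lm[mp.solutions.pose.PoseLandmark.RIGHT_ELBOW]
    xs = [int(shoulder.x * w), int(elbow.x * w)]
    ys = [int(shoulder.y * h), int(elbow.y * h)]
    pad = 30  # pixels of context around the limb segment
    x0, x1 = max(min(xs) - pad, 0), min(max(xs) + pad, w)
    y0, y1 = max(min(ys) - pad, 0), min(max(ys) + pad, h)
    return image[y0:y1, x0:x1]
```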
I would like to increase the density of my AIS or GPS data in order to carry out more precise analyses afterwards. During my research I came across different approaches such as interpolation, filtering and imputation. With the first two approaches, there is no doubt that they can be used to approximate the points between two collected data points.
In the case of imputation (e.g. MICE), however, I have not yet found an approach in the literature for determining position data.
That's why I wanted to ask if anyone knows of a paper dealing with this subject, and whether it makes sense at all to approximate further position data by imputation.
The problem you are describing is trajectory reconstruction for AIS/GPS data. There are a number of papers on general trajectory reconstruction (see this for example), but AIS data are quite specific.
The irregularity of AIS data is a well-known problem with no standard approach to dealing with it, as far as I know.
However, there is a handful of publications that try to deal with this issue. The reconstruction problem is connected to the trajectory prediction problem, since the two share some of the same methods (the latter is more popular in the scientific community, I think).
Traditionally, AIS trajectory reconstruction is done using some physical models, which take into account the curvature of the earth and other factors, such as data noise (see examples here, here, and here).
More recent approaches use LSTM neural networks.
I don't know much about GPS data, but I think the methods are very similar to the ones mentioned above (especially taking into account the fact that you probably want to deal with maritime data).
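If you just want a baseline before reaching for physical models or LSTMs, simple time-based interpolation per vessel is easy to set up. A minimal sketch with pandas; the file name and column names (mmsi, timestamp, lat, lon) are assumptions about your AIS export, and linear interpolation ignores earth curvature, which is fine only for short gaps:

```python
import pandas as pd

# Assumed columns: mmsi (vessel id), timestamp, lat, lon.
df = pd.read_csv("ais.csv", parse_dates=["timestamp"])

def densify(track, freq="1min"):
    """Resample one vessel's track to a regular time grid and interpolate
    positions linearly in time."""
    track = track.set_index("timestamp").sort_index()
    grid = track[["lat", "lon"]].resample(freq).mean()
    return grid.interpolate(method="time")

dense = df.groupby("mmsi").apply(densify)
print(dense.head())
```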
Recently I started toying with TensorFlow, DNNs, etc. Now I'm trying to implement something more serious: information retrieval from short sentences (doctor instructions).
Unfortunately the dataset I have is, as always, quite "dirty". Since I'm trying to use word embeddings, I actually need "clean" data. Take one example:
"Take two pilleach day". There is a missing white space between "pill" and "each". I am implementing a "tokenizer improver" that looks at each sentence and proposes a new tokenization based on the joint probability of the words in the sentence, given the frequency of terms in the whole document (tf). As I was working on it today, a thought came to my mind: why bother writing a suboptimal solution to this problem when I can employ powerful learning algorithms such as LSTM networks to do it for me? However, as of today, I have only a feeling that this is actually possible, and as we know, feelings are not the best guide when architecting such complex problems. I don't know where to begin: what should my training set and learning goal be?
I know this is a broad question, but I know there are many brilliant people with more knowledge about TensorFlow and neural nets, so I'm sure somebody has either already solved a similar problem or knows how to approach it.
Any guidance is welcome; I do not expect you to solve this for me, of course :)
Kisses and all the best to the whole TensorFlow community :)
I had the same issue. I solved it by using a character-level net. Basically I rewrote Character-Aware Neural Language Models, kicked out the whole "words" element and just stayed at the character level.
Training data: I took the data I had, as dirty as it was, used the dirty data as targets, and made it even dirtier to create the inputs.
So your "Take two pilleach day" will be learned because in many cases you do have a clean and similar phrase, e.g. "Take one pill each morning", which under the regime mentioned serves as the target, and you train the net on corrupted inputs like "Take oe pileach mornin".
This issue is seen when performing training against my own dataset, which was converted to binary via data_convert_example.py. After a week of training, I get decode results that don't make sense when comparing the decode and ref files.
If anyone has been successful and gotten results similar to what is posted in the Textsum readme using their own data, I would love to know what has worked for you...environment, tf build, number of articles.
I currently have not had luck with 0.11, but have gotten some results with 0.9; however, the decode results are similar to those shown below, and I have no idea where they are even coming from.
I am currently running Ubuntu 16.04, TF 0.9, CUDA 7.5 and cuDNN 4. I tried TF 0.11 but was dealing with other issues, so I went back to 0.9. It does seem that the decode results are being generated from valid articles, but the reference file and decode file indices have NO correlation.
If anyone can provide any help or direction, it would be greatly appreciated. Otherwise, should I figure anything out, I will post here.
A few final questions. Regarding the vocab file referenced: does it need to be sorted by word frequency at all? I never did anything along these lines when generating it and wasn't sure if this would throw something off.
Finally, I made the assumption in generating the data that the training articles should be broken down into smaller batches. I separated the articles into multiple files of 100 articles each, named data-0, data-1, etc. I assume this was a correct assumption on my part? I also kept all the vocab in one file, which has not seemed to throw any errors.
Are the above assumptions correct as well?
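For reference, the sharding itself was nothing fancier than splitting the article list into chunks of 100 and writing one file per chunk; roughly the following sketch (the write step is a placeholder for however you feed each shard to data_convert_example.py, which is run separately per shard):

```python
def write_shards(articles, shard_size=100, prefix="data"):
    """Write articles into plain-text shard files named data-0, data-1, ...
    Each shard is then converted to binary with data_convert_example.py."""
    for start in range(0, len(articles), shard_size):
        chunk = articles[start:start + shard_size]
        path = "{}-{}".format(prefix, start // shard_size)
        with open(path, "w") as f:
            f.write("\n".join(chunk))
```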
Below are some ref and decode results which you can see are quite odd and seem to have no correlation.
DECODE:
output=Wild Boy Goes About How I Can't Be Really Go For Love
output=State Department defends the campaign of Iran
output=John Deere sails profit - Business Insider
output=to roll for the Perseid meteor shower
output=Man in New York City in Germany
REFERENCE:
output=Battle Chasers: Nightwar Combines Joe Mad's Stellar Art With Solid RPG Gameplay
output=Obama Meets a Goal That Could Literally Destroy America
output=WOW! 10 stunning photos of presidents daughter Zahra Buhari
output=Koko the gorilla jams out on bass with Flea from Red Hot Chili Peppers
output=Brenham police officer refused service at McDonald's
Going to answer this one myself. It seems the issue here was the lack of training data. In the end I did sort my vocab file, but it seems this is not necessary; the reason for doing it was to allow the end user to limit the vocab to something like 200k words should they wish.
The biggest reason for the problems above was simply the lack of data. When I ran the training in the original post, I was working with 40k+ articles. I thought this was enough, but clearly it wasn't, and this became even more evident when I got deeper into the code and gained a better understanding of what was going on. In the end I increased the number of articles to over 1.3 million, trained for about a week and a half on my 980 GTX, got the average loss down to about 1.6 to 2.2, and saw MUCH better results.
I am learning this as I go, but I stopped at the above average loss because some reading I did stated that when you run "eval" against your "test" data, the average loss should be close to what you are seeing in training; if the two are far apart, that helps indicate you are getting close to over-fitting. Again, take this with a grain of salt, as I am still learning, but it seems to make sense logically to me.
One last note that I learned the hard way: make sure you upgrade to the latest 0.11 TensorFlow version. I originally trained using 0.9, but when I went to figure out how to export the model for TensorFlow, I found that there was no export.py file in that repo. When I upgraded to 0.11, I then found that the checkpoint file structure had changed in 0.11, and I needed to take another 2 weeks to train. So I would recommend just upgrading, as they have resolved a number of the problems I was seeing during the RC. I still had to set is_tuple=false, but that aside, all has worked out well. Hope this helps someone.