I'm participating in a small data-analysis competition at our school. We use Fitbit wearable devices, which the contest host loans to each participant. For the two months of the contest, participants walk and sleep with this small device 24/7, allowing it to gather data such as step counts and heart rate (bpm), and we need to solve some problems based on the participants' data. For example: show the relationship between rainy days and the participants' workout rate using a chart. I think the purpose of the problem is that, because of rain, many participants are expected to stay at home. Can you show some cause and effect numerically?
I'm now studying the Python libraries NumPy and pandas with the IPython Notebook, but I still have no idea how to solve these problems. Could you recommend some projects or sites to use as references? I'm really eager to win this competition. :(
And lastly, sorry for my poor English.
Thank you.
That's a fun project. I'm working on something kind of similar.
Here's what you need to do:
1. Learn the Fitbit API and stream the data from the Fitbit accelerometer and gyroscope. If you can combine this with heart-rate data, great: the more types of data you have, the more effective your algorithm will be. You can store this data in a simple CSV file (streaming the accel/gyro data at 50 Hz is recommended), or set up a web server and store it in a database for easy access.
2. Learn how to use pandas and scikit-learn.
3. [Optional but recommended] Learn matplotlib so you can graph your data and get a feel for how it looks.
4. Load the data into pandas and create features on the data, notably using 1-2 second sliding-window analysis with 50% overlap. Good features include (for all three accelerometer axes X, Y, Z): max, min, standard deviation, root mean square, root sum square, and tilt. Polynomials will help.
5. Since this is a supervised classification problem, you will need some labelled data, so create it manually (state 1 = rainy day, state 2 = non-rainy day) and then train a classification algorithm. I would recommend a random forest.
6. Test using unlabeled data, and don't forget to use cross-validation. (A minimal sketch of steps 4-6 follows this list.)
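Here's a rough sketch of steps 4-6, assuming a CSV with made-up column names (ax, ay, az, label) sampled at 50 Hz; it computes a subset of the window features above and cross-validates a random forest:

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Hypothetical sensor log: 50 Hz accelerometer samples plus a label
    # column (1 = rainy day, 0 = non-rainy day). Column names are made up.
    df = pd.read_csv('sensor_log.csv')

    window = 100  # 2 seconds at 50 Hz
    step = 50     # 50% overlap

    features, labels = [], []
    for start in range(0, len(df) - window, step):
        w = df.iloc[start:start + window]
        row = []
        for axis in ('ax', 'ay', 'az'):
            v = w[axis].to_numpy()
            row += [v.max(), v.min(), v.std(), np.sqrt(np.mean(v ** 2))]
        features.append(row)
        labels.append(w['label'].mode()[0])  # majority label in the window

    X, y = np.array(features), np.array(labels)
    clf = RandomForestClassifier(n_estimators=100)
    print(cross_val_score(clf, X, y, cv=5).mean())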
Voila, you now have a highly accurate model and will win the competition. Plus you've learned about a bunch of really cool Python and machine learning stuff.
For more tutorials on how all this stuff works, I'd highly recommend the Kaggle tutorial projects.
BONUS: If you want to take it to a new level, you can start adding smoothers on top of your classifier, for example by using a Hidden Markov Model as explained in this talk.
BONUS 2: Go get a PhD in Human Activity Recognition.
I know how to code a machine learning or deep learning model, but I don't know how to run it in real time, for example on the stock market, and get real-time predicted values.
This is a simple toy example. Let us assume that for stock-market prediction you use a fixed time window, e.g. the past 10 candlesticks. You query the data, pass it to the model, and make your prediction (for example, predicting the next 5 candlesticks). When the 11th candlestick appears in real time, you query the previous 10 candlesticks again [2nd-11th] and make predictions for the next 5. You can adjust overlapping predictions by averaging them, for instance. (A toy sketch of this loop follows.)
This scheme would work for both machine learning and deep learning. Of course, it is just a toy example, and there are more sophisticated ways of doing this. So get creative and read a bunch of research papers to see how it is done in the industry.
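Here is that rolling scheme as a minimal sketch; the model and data are stand-ins (a dummy predictor over random numbers), not a real trading setup:

    import numpy as np

    # Dummy model with a scikit-learn-style predict(); it just repeats the
    # last value of the window five times.
    class DummyModel:
        def predict(self, window):
            return np.repeat(window[-1], 5)

    model = DummyModel()
    stream = np.random.randn(100)  # stand-in for a real price stream
    window_size = 10

    predictions = {}  # step index -> list of overlapping predictions
    for t in range(window_size, len(stream)):
        window = stream[t - window_size:t]             # the past 10 "candlesticks"
        for i, p in enumerate(model.predict(window)):  # the next 5
            predictions.setdefault(t + i, []).append(p)

    # Overlapping predictions for the same step are averaged:
    averaged = {t: np.mean(ps) for t, ps in predictions.items()}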
I am using TensorFlow Probability to build a VAE which includes image pixels as well as some other variables. The output of the VAE is:
tfp.distributions.Independent(tfp.distributions.Bernoulli(logits=logits), reinterpreted_batch_ndims=2, name="decoder-dist")
I am trying to understand how to form other conditional distributions based on this which I can use with the inference methods (MCMC or VI). Say the output above was P(A,B,C | Z), how would I take that distribution to form a posterior P(A|B, C, Z) that I could perform inference on? I have been trying to read through the docs but I am having some trouble grasping them.
The answer to your question depends very much on the nature of the joint model within which you'd like to do the conditioning. Much has been written about the topic, and in short it's a very hard problem in general :). Without knowing a bit more about the particulars of your problem, it's near impossible to recommend a useful generic inference procedure. However, we do have a bunch of examples (scripts and jupyter/colab notebooks) in the TFP repo here: https://github.com/tensorflow/probability/tree/master/tensorflow_probability/examples
In particular, there's
The Hierarchical Linear Model example, which is a sort of Rosetta stone showing how to do posterior inference using Hamiltonian Monte Carlo (an MCMC technique) in TFP, R, and Stan,
The Linear Mixed Effects Model example, showing how you might use VI to solve a standard LME problem,
among many others. You can click the "Run in Google Colab" link at the top of any of these notebooks to open and run them on https://colab.research.google.com.
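To show the moving parts of posterior inference in TFP, here is a minimal, generic HMC sketch over a toy target; in your case, target_log_prob would come from your joint model with B, C, and Z pinned to their values:

    import tensorflow as tf
    import tensorflow_probability as tfp
    tfd = tfp.distributions

    # Toy target: a standard normal "posterior" standing in for
    # log P(A | B, C, Z).
    def target_log_prob(a):
        return tfd.Normal(0., 1.).log_prob(a)

    kernel = tfp.mcmc.HamiltonianMonteCarlo(
        target_log_prob_fn=target_log_prob,
        step_size=0.1,
        num_leapfrog_steps=3)

    samples = tfp.mcmc.sample_chain(
        num_results=1000,
        num_burnin_steps=500,
        current_state=tf.zeros([]),
        kernel=kernel,
        trace_fn=None)  # trace nothing; return just the chain states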
Please also feel free to reach out to us via email at tfprobability@tensorflow.org. This is a public Google Group where users can engage directly with the team that builds TFP. If you give us some more info there on what you'd like to do, we're happy to provide guidance on modeling and inference with TFP.
Hope this gives you at least a start in the right direction!
I'm looking into designing a software platform that will aid linguists and anthropologists in their study of previously unstudied languages. By some estimates, around 1,000 languages exist that have never been studied by anyone outside their respective speaker communities.
My goal is to use TensorFlow to build a platform that lets linguists study and document these languages more efficiently, and helps them create writing systems for the ones that don't already have one. One of their current methods for accomplishing this task is three-fold: 1) record a native speaker conversing in the language; 2) listen to that recording and try to transcribe it into the IPA; 3) from the phonetics, analyze the phonemics and phonotactics of the language to eventually create a writing system for its speakers.
My proposed platform would cut that research time down from a minimum of a year to a maximum of six months. Before I start, I have some questions...
What would be required to train TensorFlow to transcribe live audio into the IPA? Has this already been done? And if so, how would I utilize a previous solution for this project? Is a project like this even possible with TensorFlow? If not, what would you recommend using instead?
My apologies for the magnitude of this question. I don't have much experience in the realm of machine learning, as I am just beginning the research process for this project. Any help is appreciated!
I guess I will take a first shot at answering this. Since the question is pretty general, my answer will have to be pretty general as well.
What would be required? At the very least, you would need a large dataset of pre-transcribed data: ideally a large amount of spoken-language audio mapped to characters of the phonetic alphabet, so the system could learn the sound of individual characters rather than whole transcribed words. If such a dataset doesn't exist, a less granular dataset could be used, mapping single words to their transcriptions. Then you would need a model, that is, the actual neural network architecture implemented in code. And lastly, you would need some computing resources. This is not something you can train casually; you would either have to buy time on a cloud-based machine learning platform (like Google Cloud ML) or build a fairly expensive machine to train at home.
Has this been done? I don't know; I don't think so. There have been published papers reporting various degrees of success at training systems to transcribe speech. Here is one, for example: http://deeplearning.stanford.edu/lexfree/lexfree.pdf. Since the alphabet you want to transcribe into is specifically designed to capture the way words sound, rather than just to write the words down, you might have more success at training such a model.
Is it possible with TensorFlow? Yes, most likely. TensorFlow is well suited to implementing most modern deep learning architectures. Unless you end up designing some really weird and very original model for this purpose, TensorFlow should work just fine.
Edit: after some thought, in part 1 you would have to use a dataset mapping spoken words to their transcriptions, since I expect that a sound pronounced in isolation differs from the same sound used within a word.
This has actually been done, albeit in PyTorch, by a group at CMU: https://github.com/xinjli/allosaurus
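If I'm reading the allosaurus README correctly, basic usage is roughly this (the file name is a placeholder):

    # Rough usage sketch based on the allosaurus README; sample.wav is a
    # placeholder. Install first with: pip install allosaurus
    from allosaurus.app import read_recognizer

    model = read_recognizer()            # load the default universal model
    print(model.recognize('sample.wav')) # prints a string of IPA phones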
Having read this article about a guy who uses TensorFlow to sort cucumbers into nine different classes, I was wondering if this type of process could be applied to a much larger number of classes. My idea would be to use it to identify Lego parts.
At the moment, a site like BrickLink describes more than 40,000 different parts, so it would be a bit different from the cucumber example, but I am wondering if it sounds suitable. There is no easy way to get hundreds of pictures of each part, but does the following process sound feasible:
take pictures of a part;
try to identify the part using TensorFlow;
if it does not identify the correct part, take more pictures and feed the neural network with them;
go on to the next part.
That way, each time we encounter a new piece we "teach" the network its reference so that it can be recognized better the next time. After hundreds of such iterations monitored by a human, could we expect TensorFlow to recognize the parts, at least the most common ones?
My question might sound stupid, but I am not into neural networks, so any advice is welcome. At the moment I have not found any way to identify a Lego part from pictures, and this cucumber example sounds promising, so I am looking for feedback.
Thanks.
You can read about the work of Jacques Mattheij; he actually uses a customized version of Xception [1] running on https://keras.io/.
The introduction is Sorting 2 Metric Tons of Lego.
In Sorting 2 Tons of Lego, The Software Side, you can read:
The hard challenge to deal with next was to get a training set large enough to make working with 1000+ classes possible. At first this seemed like an insurmountable problem. I could not figure out how to make enough images and to label them by hand in acceptable time; even the most optimistic calculations had me working for 6 months or longer full-time in order to make a data set that would allow the machine to work with many classes of parts rather than just a couple.
In the end the solution was staring me in the face for at least a week before I finally clued in: it doesn’t matter. All that matters is that the machine labels its own images most of the time and then all I need to do is correct its mistakes. As it gets better there will be fewer mistakes. This very rapidly expanded the number of training images. The first day I managed to hand-label about 500 parts. The next day the machine added 2000 more, with about half of those labeled wrong. The resulting 2500 parts were the basis for the next round of training 3 days later, which resulted in 4000 more parts, 90% of which were labeled right! So I only had to correct some 400 parts, rinse, repeat… So, by the end of two weeks there was a dataset of 20K images, all labeled correctly.
This is far from enough; some classes are severely under-represented, so I need to increase the number of images for those. Perhaps I’ll just run a single batch consisting of nothing but those parts through the machine. No need for corrections, they’ll all be labeled identically.
A recent update is Sorting 2 Tons of Lego, Many Questions, Results.
[1] CHOLLET, François. Xception: Deep Learning with Depthwise Separable Convolutions. arXiv preprint arXiv:1610.02357, 2016.
I have started on this using IBM Watson's Visual Recognition.
I had six different bricks to be recognized against the transport-belt background.
I am actually thinking about TensorFlow now, since I can have it running locally.
The codelab TensorFlow for Poets describes almost exactly what you want to achieve.
For a demo of the Watson version:
https://www.ibm.com/developerworks/community/blogs/ibmandgoogle/entry/Lego_bricks_recognition_with_Watosn_lego_and_raspberry_pi?lang=en
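For a flavor of the retraining approach that codelab takes, here is a minimal transfer-learning sketch in Keras (not the codelab's exact script); the directory name, class count, and layout (lego_photos/<part_id>/*.jpg) are all assumptions:

    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, pooling='avg')
    base.trainable = False  # reuse ImageNet features, train only the head

    num_classes = 6  # e.g. start with six brick types
    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1. / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
        base,
        tf.keras.layers.Dense(num_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    # Hypothetical layout: one folder per part, e.g. lego_photos/3001/*.jpg
    train_ds = tf.keras.utils.image_dataset_from_directory(
        'lego_photos', image_size=(224, 224), batch_size=32)
    model.fit(train_ds, epochs=5)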
The TensorFlow home page describes its purpose as 'a software library for numerical computation'. Looking through the sample problems it looks like a problem is always formulated as follows:
1) Input
2) Model parameters
3) Desired output
Given some training data for 1) and 3), 2) can be computed. (A toy instance of this follows.)
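For concreteness, here is a toy instance of that formulation in TensorFlow, with made-up numbers: given inputs (1) and desired outputs (3), gradient descent recovers the model parameters (2) of y = w*x + b:

    import tensorflow as tf

    x = tf.constant([[0.], [1.], [2.], [3.]])  # 1) input
    y = tf.constant([[1.], [3.], [5.], [7.]])  # 3) desired output
    w = tf.Variable(0.0)                       # 2) model parameters
    b = tf.Variable(0.0)

    opt = tf.keras.optimizers.SGD(learning_rate=0.1)
    for _ in range(200):
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean((w * x + b - y) ** 2)
        grads = tape.gradient(loss, [w, b])
        opt.apply_gradients(zip(grads, [w, b]))

    print(w.numpy(), b.numpy())  # converges to roughly 2.0 and 1.0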
I can see how this can be used to create bots, self-driving cars, image classifiers etc.
Given the broad definition of 'numerical computation', am I missing a class of other problems this can be used for? Can this be used for, say, more classical numerical computations such as the airflow around an aircraft or deformation of a structure under stress? Do you have any examples of how these classical problems would have to be formulated to fit the form above?
This is a nice discussion of what artificial neural networks could do; the fact that our brain is a neural network might imply that eventually an artificial neural network will be able to do the same tasks.
Some more examples of artificial neural networks in use today: music creation, image-based location, PageRank, Google Voice, stock-trade prediction, NASA star classification, traffic management.
Some fields I know of but do not have a good reference for:
an optical quantum mechanics test-setup generator
medical diagnosis (reference only about safety)
the Sharp LogiCook microwave oven (wiki, NASA mention)
I think there are many millions of "problems" that can be solved with an ANN; deciding on the data representation (input, output) will be a challenge for some of them. Some useful and useless examples I have been thinking about:
a home thermostat that learns your wishes under certain weather types
bakery production prediction
recognizing Go stones on a board and mapping their locations
guessing a person's activity and turning on the appropriate device
recognizing a person based on mouse movement
Given the right data and network, these examples will work.
Dad has a PC controlling the heating system back home; I trained a network based on his 10 years of heating data (outside temperature, inside temperature, humidity, etc.). Unfortunately, I am not allowed to hook it up.
My aunt and uncle have a bakery; based on 6 years of sales data, I trained a network predicting how many breads and buns they should make. It showed me how important the correct inputs are: at first I used the day of the year, but when I switched to the day of the week I saw a 15% increase in accuracy. (A small illustration of this follows.)
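As an illustration of that representation point, here is a sketch with made-up bakery data (a weekly sales cycle), comparing a raw day-of-year feature against a one-hot day-of-week encoding with a simple linear model:

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Made-up data: sales follow a weekly cycle (a Saturday spike).
    rng = np.random.default_rng(0)
    dates = pd.date_range('2016-01-01', periods=365)
    sales = 100 + 30 * (dates.dayofweek == 5) + rng.normal(0, 3, 365)

    doy = dates.dayofyear.to_numpy().reshape(-1, 1)              # day of year: 1..365
    dow = pd.get_dummies(pd.Series(dates.dayofweek)).to_numpy()  # one-hot day of week

    # R^2 of a linear fit on each representation:
    print(LinearRegression().fit(doy, sales).score(doy, sales))  # near 0: no linear trend
    print(LinearRegression().fit(dow, sales).score(dow, sales))  # near 1: weekly cycle captured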
Currently I am working on a network that will detect a Go board in a given image and map all 361 locations, telling me whether a black stone, a white stone, or no stone is present.
Two examples that showed me how much information can be stored in a single neuron, and different ways to represent data:
an image example and a neuron example (unfortunately you have to train both examples yourself, so give them a little time).
On to your example: airflow around an aircraft.
I know next to nothing about airflow calculations; my try would be a really huge 3D input layer where you can "draw" an airplane together with the direction and speed of the airflow.
It might work, but it would require a tremendous amount of computation power; somebody who knows more about this specific topic probably knows a more abstract way of representing the data, resulting in a more manageable network.
This NASA paper talks about a neural network for calculating airflow around a wing. Unfortunately, I do not understand what kind of input they use; maybe it is clearer to you.