How does DeepMind reduce the calculation of Q-values for Atari games?

We know Q-learning needs tons of calculations:
The huge amount of states in Q-learning calculation
A game-playing AI needs far more Q-values than tic-tac-toe (OX) or Go.
How can such a huge number of Q-values be calculated?
Thanks.

MCTS doesn't actually reduce any calculation of Q-values.
Even a very simple Atari game needs far more than the 3^(19x19) Q-values of Go.
Look into the Deep Q-Network (DQN); it solves exactly this problem:
We could represent our Q-function with a neural network that takes
the state (four game screens) and action as input and outputs the
corresponding Q-value. Alternatively we could take only game screens
as input and output the Q-value for each possible action. This
approach has the advantage that, if we want to perform a Q-value
update or pick the action with highest Q-value, we only have to do one
forward pass through the network and have all Q-values for all actions
immediately available.
https://neuro.cs.ut.ee/demystifying-deep-reinforcement-learning/
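For illustration, here is a minimal sketch of that second design (PyTorch, with assumed 84x84 inputs and illustrative layer sizes; not DeepMind's exact architecture): the network takes a stack of four game screens and returns one Q-value per action in a single forward pass.

```python
import torch
import torch.nn as nn

class DQN(nn.Module):
    def __init__(self, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4),  # 4 stacked frames in
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 9 * 9, 256),
            nn.ReLU(),
            nn.Linear(256, n_actions),                  # one Q-value per action
        )

    def forward(self, screens: torch.Tensor) -> torch.Tensor:
        return self.net(screens)

q_net = DQN(n_actions=4)
state = torch.zeros(1, 4, 84, 84)     # batch of one four-frame state
q_values = q_net(state)               # shape (1, 4): all actions at once
best_action = q_values.argmax(dim=1)  # greedy action in one forward pass
```

Instead of storing a table with one entry per (state, action) pair, the network's weights generalize across the astronomically many screen states.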

Related

How to interpret "Value Loss" chart in TensorBoard?

I have a target-finding, obstacle-avoiding helicopter in Unity Machine Learning Agents. Looking at the TensorBoard for my training, I'm trying to get a feel for how to interpret the "Losses/Value Loss".
I've googled many articles on ML loss, like this one, but I can't seem to get an intuitive understanding yet of what it all means for my little helicopter and what changes I should implement, if any. (The helicopter is rewarded for getting closer to the target and again for reaching it, and punished for getting further away or colliding. It measures a variety of things like relative speed, relative target position, ray sensors and so on, and it does basically work at target-finding, whereas more complicated maze-type obstacles have not been tested or trained on yet. It uses 3 layers.) Thanks!
In reinforcement learning and specifically regarding actor/critic algorithms, value loss is the difference (or an average of many such differences) between the learning algorithm's expectation of a state's value and the empirically observed value of that state.
What is a state's value? A state's value is, in short, how much reward you can expect given that you start in that state. Immediate reward contributes fully to this amount. Rewards that can occur later contribute less, with more distant occurrences contributing less and less. We call this reduction in contribution to value a "discount", or we say that these rewards are "discounted".
Expected value is how much the critic part of the algorithm predicts the value to be. In the case of a critic implemented as a neural network, it's the output of the neural network with the state as its input.
Empirically observed value is the amount you get when you add up the rewards you actually received upon leaving that state, plus the (discounted) rewards you received over the next several steps (say these steps ended in state X), and (perhaps, depending on implementation) plus some discounted amount based on the value of state X.
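In symbols (standard notation, not from the original answer): with discount factor gamma in [0, 1) and an n-step lookahead ending in state X = s_{t+n}, the empirical target and the per-sample value loss are

```latex
\hat{V}(s_t) = r_t + \gamma r_{t+1} + \dots + \gamma^{n-1} r_{t+n-1} + \gamma^{n} V(s_{t+n}),
\qquad
\text{value loss} \approx \big( V_\theta(s_t) - \hat{V}(s_t) \big)^2
```

where V_theta is the critic's prediction; the TensorBoard chart is an average of many such squared differences.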
In short, the smaller the value loss, the better the critic has become at predicting how well the agent is going to perform. This doesn't mean the agent gets better at playing: after all, an agent can be terrible at a game yet accurately predict that it will lose, and when, if it learns to choose actions that make it lose quickly!

How to do real-time machine learning and deep learning?

I know how to code machine learning and deep learning models, but I don't know how to run them in real time, e.g. on stock market data, and get real-time predicted values.
This is a simple toy example. Let us assume that for stock market prediction you use a fixed time window, e.g. the past 10 candlesticks. You query the data, pass it to the model and make your prediction (for example, predicting the next 5 candlesticks). When the 11th candlestick appears in real time, query the previous 10 candlesticks [2nd-11th] again and predict the next 5. You can reconcile overlapping predictions by averaging them, for instance.
This scheme works for both machine learning and deep learning. Of course it is just a toy example; there are more sophisticated ways of doing this. So get creative and read a bunch of research papers to see how it is done in industry.
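As a minimal sketch of this scheme (Python; fetch_latest_candles and model are hypothetical placeholders for your data feed and trained model):

```python
import time
import numpy as np

WINDOW = 10   # past candlesticks used as model input
HORIZON = 5   # future candlesticks predicted per call

def predict_stream(model, fetch_latest_candles, poll_seconds=60):
    forecasts = {}  # future step index -> list of overlapping predictions
    step = 0
    while True:
        window = fetch_latest_candles(WINDOW)              # shape (WINDOW, n_features)
        preds = model.predict(window[np.newaxis, ...])[0]  # shape (HORIZON,)
        for offset, value in enumerate(preds, start=1):
            forecasts.setdefault(step + offset, []).append(value)
        # No later call can add to step + 1, so emit its averaged forecast now.
        if step + 1 in forecasts:
            print(step + 1, np.mean(forecasts.pop(step + 1)))
        step += 1
        time.sleep(poll_seconds)  # wait for the next candlestick to close
```

The dictionary keeps the overlapping forecasts for each future candlestick so they can be averaged once the last one has arrived.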

Particles passing through each other

I am writing code to simulate particle movement (currently 2D, hopefully 3D soon).
The thing is, if I use a relatively large timestep, particles end up passing through each other.
Do you have any suggestions that would allow me to correct that without using a really small step?
(It is in C++, if that makes much difference.)
Using a fixed timestep to advance the clock introduces model artifacts which can destroy the model's validity, as is happening in your case. Use discrete-event scheduling instead. This paper from the Winter Simulation Conference 2005 describes how to implement movement in a discrete-event framework. Your model will not only be more accurate, it will probably run much faster as well.
So you will have to do some sort of collision detection to see if two objects would collide.
Depending on your data structure, the detection could take many forms. If you just have a list of points, you would have to check all pairs against each other, which is O(N^2) per step (sweeping each particle along its movement vector to create a larger spatial footprint). This could be done with the GJK algorithm.
Using a spatial data structure could reduce the complexity by running GJK only on a pruned set of particles, i.e. skipping pairs that cannot possibly overlap.
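Here is a minimal sketch of the time-of-impact test for two moving circular particles (Python for brevity; the logic ports directly to C++): solve |dp + dv*t| = r1 + r2 for the earliest t in [0, dt], and if such a t exists, handle the collision there instead of letting the particles tunnel through each other.

```python
import numpy as np

def time_of_impact(p1, v1, r1, p2, v2, r2, dt):
    """Earliest time in [0, dt] at which the two particles touch, or None."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)  # relative position
    dv = np.asarray(v2, float) - np.asarray(v1, float)  # relative velocity
    r = r1 + r2
    a = dv @ dv
    b = 2.0 * (dp @ dv)
    c = dp @ dp - r * r
    if a == 0.0:                      # no relative motion
        return 0.0 if c <= 0.0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                    # paths never come within r of each other
        return None
    t = (-b - np.sqrt(disc)) / (2.0 * a)  # smaller root = first contact
    return t if 0.0 <= t <= dt else None
```

Running this on the (pruned) candidate pairs each step lets you keep a large timestep while still catching collisions that happen inside it.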

Analyzing Fitbit walking and sleeping data

I'm participating in a small data analysis competition at our school.
We use Fitbit wearable devices, which are loaned to each participant by the contest host.
For the 2 months of the contest, participants walk and sleep with this small device 24/7,
allowing it to gather data such as step count and heart rate (bpm),
and we need to solve some problems based on these participants' data.
For example:
show the relation between rainy days and participants' workout rate using a chart.
I think the purpose of the problem is this:
because of rain, many participants are expected to stay at home;
can you show some cause and effect numerically?
I'm currently studying the Python libraries NumPy and pandas with IPython Notebook,
but I still have no idea how to solve these problems.
Could you recommend some projects or sites to use as references? I'm really eager to win this competition. :(
And lastly, sorry for my poor English.
Thank you.
That's a fun project. I'm working on something kind of similar.
Here's what you need to do:
Learn the Fitbit API and stream the data from the Fitbit accelerometer and gyroscope. If you can combine this with heart rate data, great; the more types of data you have, the more effective your algorithm will be. You can store this data in a simple CSV file (streaming the accel/gyro data at 50 Hz is recommended), or set up a web server and store it in a database for easy access
Learn how to use pandas and scikit learn
[optional but recommended]: Learn matplotlib so you can graph your data and get a feel for how it looks
Load the data into pandas and create features on the data - notably using 1-2 second sliding-window analysis with 50% overlap. Good features include (for all three accelerometer axes X, Y, Z): max, min, standard deviation, root mean square, root sum square and tilt. Polynomials will help. (A sketch of this step and the next two follows the list.)
Since this is a supervised classification problem, you will need to create some labelled data - so do this manually (state 1 = rainy day, state 2 = non-rainy day) and then train a classification algorithm. I would recommend a random forest
Test on held-out data - don't forget to use cross-validation
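Here is a rough sketch of those last three steps (Python; the column names and the per-row label are assumptions about how you stored the stream, not the real Fitbit export format):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("fitbit_stream.csv")  # assumed columns: accel_x/y/z, label

WINDOW = 100  # ~2 s of samples at 50 Hz
STEP = 50     # 50% overlap between windows

rows = []
for start in range(0, len(df) - WINDOW, STEP):
    w = df.iloc[start:start + WINDOW]
    feats = {}
    for axis in ("accel_x", "accel_y", "accel_z"):
        a = w[axis].to_numpy()
        feats[f"{axis}_max"] = a.max()
        feats[f"{axis}_min"] = a.min()
        feats[f"{axis}_std"] = a.std()
        feats[f"{axis}_rms"] = np.sqrt(np.mean(a ** 2))
    feats["label"] = w["label"].mode().iloc[0]  # majority label in the window
    rows.append(feats)

features = pd.DataFrame(rows)
X, y = features.drop(columns="label"), features["label"]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy
```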
Voila, you now have a highly accurate model and will win the competition. Plus you've learned about a bunch of really cool Python and machine learning stuff.
For more tutorials on how all this stuff works, I'd highly recommend the Kaggle tutorial projects
BONUS: If you want to take it to a new level, you can start adding smoothers on top of your classifier, for example by using a Hidden Markov Model as explained in this talk
BONUS 2: Go get a PhD in Human Activity Recognition.

Automatic feature extraction from chess board positions

I am working on a project where I take a chess board position (a FEN string converted to binary) and its evaluation score and feed them to a neural network. My aim is to make the neural network differentiate between good and bad positions.
How I encode the position: There are 12 unique pieces in chess, i.e. pawn, rook, knight, bishop, queen and king, for white as well as black. I encode each piece using 4 bits, with 0000 denoting an empty square. So the 64 squares are encoded into 256 bits, and I use 6 more bits to denote game state like whose turn it is to move, castling status, etc.
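For concreteness, a small sketch of that encoding (the exact 4-bit codes below are hypothetical; the question doesn't specify the assignment):

```python
# 12 piece types + empty square, 4 bits each; 64 * 4 + 6 = 262 bits total.
PIECE_CODES = {".": 0b0000,
               "P": 0b0001, "N": 0b0010, "B": 0b0011,  # white pieces
               "R": 0b0100, "Q": 0b0101, "K": 0b0110,
               "p": 0b0111, "n": 0b1000, "b": 0b1001,  # black pieces
               "r": 0b1010, "q": 0b1011, "k": 0b1100}

def encode_board(squares, state_bits):
    """squares: 64 piece characters in board order; state_bits: 6 extra bits
    (side to move, castling rights, ...)."""
    bits = []
    for piece in squares:
        code = PIECE_CODES[piece]
        bits.extend((code >> i) & 1 for i in (3, 2, 1, 0))  # 4 bits per square
    bits.extend(state_bits)
    return bits  # length 262
```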
Problem: Since the input space for chess positions is neither smooth nor uni-modal (one small change in the board position can result in a huge change in the evaluation score), the neural network doesn't learn well. The next logical thing is to somehow extract useful features (like material difference, center control, etc.) and feed them to the network.
I do not want to hand pick the features as I want the network to learn everything by itself. Therefore I am thinking of extracting features automatically using autoencoders. Is there any better way to accomplish this?
Summary : What is the best way to automatically extract features from a chess board position so that it can be fed into a neural network?
UPDATE: To generate training data, I have modified Stockfish to dump its evaluation process into a log file. So every new move (position) it considers is written to a file as an FEN string along with its eval score.
Neural networks can approximate any function. The main consideration is the dimensionality of the search space, which constrains the amount of data you need to get a good approximation.
For a supervised network (you mention autoencoders, so I assume you use some variant of backpropagation), it's difficult for me to imagine how you could do the training using raw positions, because you need similar positions in your training set. Maybe your approach is different, but I'm convinced that the second strategy (using features) is more promising. I think using raw positions requires a huge amount of training data to get good results.
For features, take a look here, and at the classical work of Shannon.
I also took useful information from the source code of Crafty.
But you have to extract this information from the FEN string.
Autoencoders are a way to reduce the dimensionality of the data (good because it improves performance). Principal Component Analysis may work even better, as reported here.
I hope this can help you.
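To make the autoencoder route concrete, here is a minimal sketch (PyTorch; the layer sizes and the 262-bit input are illustrative assumptions based on the question's encoding):

```python
import torch
import torch.nn as nn

class PositionAutoencoder(nn.Module):
    def __init__(self, n_in=262, n_features=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_in, 128), nn.ReLU(),
            nn.Linear(128, n_features),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, n_in), nn.Sigmoid(),  # reconstruct the input bits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = PositionAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()  # bitwise reconstruction loss

batch = torch.randint(0, 2, (32, 262)).float()  # stand-in for encoded positions
opt.zero_grad()
loss = loss_fn(model(batch), batch)
loss.backward()
opt.step()
# After training, model.encoder(position) yields the learned low-dimensional
# features that can be fed to the evaluation network.
```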