Framework to perform hyperparameter optimization for LSTM with meta-heuristic algorithms? - tensorflow

I need to do hyperparameter optimization for an LSTM. I want to use a meta-heuristic, i.e. particle swarm optimization or an artificial bee colony algorithm, but I have no idea how to code one from scratch, so I decided to use a framework. I could not find a framework that does meta-heuristic hyperparameter optimization for LSTMs. Can you help me? I am using the NASA turbofan dataset.
I am coding on Google Colab.
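Even without a dedicated framework, the PSO loop itself is short enough to write by hand. The sketch below is a minimal, self-contained PSO in NumPy; the objective here is a cheap stand-in (its shape, the bounds, and the swarm settings are all illustrative assumptions), whereas in practice `objective(x)` would train an LSTM on the turbofan data with the hyperparameters in `x` and return the validation loss.

```python
import numpy as np

def objective(x):
    # Hypothetical stand-in for "train LSTM with hyperparameters x,
    # return validation loss". x[0] ~ log10(learning rate),
    # x[1] ~ number of hidden units.
    return (x[0] + 3.0) ** 2 + ((x[1] - 64.0) / 32.0) ** 2

def pso(objective, bounds, n_particles=20, iters=50,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO over a box-constrained search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                                  # personal bests
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()            # global best
    gbest_val = pbest_val.min()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Standard velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        if pbest_val.min() < gbest_val:
            gbest_val = pbest_val.min()
            gbest = pbest[pbest_val.argmin()].copy()
    return gbest, gbest_val

# Search over log10(learning rate) in [-5, -1] and hidden units in [16, 256].
bounds = np.array([[-5.0, -1.0], [16.0, 256.0]])
best, best_val = pso(objective, bounds)
```

Swapping the stand-in objective for a function that builds and trains a Keras LSTM makes this a full hyperparameter search; an artificial bee colony loop would slot into the same structure, only the position-update rule changes.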

Related

Deep learning for computer vision: What after MNIST stage?

I am trying to explore computer vision using deep learning techniques. I have gone through the basic literature, made a NN of my own to classify digits using MNIST data (without using any library like TF, Keras, etc., and in the process understood concepts like loss functions, optimization, and backpropagation), and then also explored Fashion-MNIST using TF Keras.
I applied the knowledge gained so far to solve a Kaggle problem (identifying a plant type), but the results are not very encouraging.
So, what should be my next step? What should I do to improve my knowledge and models to solve more complex problems? What further books and literature should I read to move past the beginner stage?
You should try hyperparameter tuning; it will help improve your model's performance. Since you already understand the fundamentals of how these models work, fine-tuning them is the natural next step, and reading around in articles on the topic is a good place to start.
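As a concrete first step, random search over a handful of settings is easy to wire up. In this sketch `train_and_score` is a hypothetical stand-in; in practice it would fit your Keras model with the sampled settings and return validation accuracy. The search ranges and candidate sizes are illustrative assumptions.

```python
import math
import random

def train_and_score(lr, n_hidden):
    # Toy surrogate for "train the model, return validation accuracy":
    # it peaks around lr = 1e-3 and n_hidden = 64.
    return 1.0 / (1.0 + (math.log10(lr) + 3) ** 2
                      + ((n_hidden - 64) / 64) ** 2)

def random_search(n_trials=50, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, -1.0
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-5, -1)   # sample learning rate on a log scale
        n_hidden = rng.choice([16, 32, 64, 128, 256])
        score = train_and_score(lr, n_hidden)
        if score > best_score:
            best_params = {"lr": lr, "n_hidden": n_hidden}
            best_score = score
    return best_params, best_score

best_params, best_score = random_search()
```

Sampling the learning rate on a log scale matters: the interesting values span several orders of magnitude, so uniform sampling in the raw range would waste most trials near the top of it.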

How to set up a real-time controller with reinforcement learning algorithms

I'm trying to control an actual robot manipulator using reinforcement learning. For reinforcement learning, I'm using Google's TensorFlow.
To control a robot manipulator, I need my controller to have real-time capability. However, as far as I know, Python, and thus TensorFlow, is not real-time friendly. I want to control the robot at about 100-1000 Hz.
I've considered implementing my own reinforcement learning algorithm in C++, but it would be too much work, and take too much time.
Is there any way of using TensorFlow reinforcement learning algorithms from C++? Or is there any other way of implementing a reinforcement learning algorithm in a C++ real-time controller?
Any help would be appreciated.
Sincerely,
Steve
I don't see a reason why TensorFlow would be unsuitable for real-time control, since a TF model is not subject to the limitations of the Python interpreter.
In case you find that standard TF is not fast enough, you can also have a look at TensorFlow Lite: https://www.tensorflow.org/lite.
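Independently of which runtime executes the policy, a control loop at a fixed rate usually schedules against an absolute clock rather than sleeping a fixed amount after each step, so that slow iterations don't accumulate drift. A minimal sketch of that pattern (the `step` callback is a stand-in for "run the policy and send a command"):

```python
import time

def run_loop(step, rate_hz, n_steps):
    """Run step(i) n_steps times at approximately rate_hz.

    Sleeps until the next absolute deadline instead of a fixed
    per-iteration delay, so timing errors do not accumulate.
    Whether Python can actually hold 100-1000 Hz depends on the OS
    scheduler and on how long each step takes.
    """
    period = 1.0 / rate_hz
    next_tick = time.monotonic()
    for i in range(n_steps):
        step(i)                       # e.g. policy inference + motor command
        next_tick += period
        delay = next_tick - time.monotonic()
        if delay > 0:
            time.sleep(delay)         # else: overran the period; catch up

outputs = []
start = time.monotonic()
run_loop(lambda i: outputs.append(i), rate_hz=100, n_steps=20)
elapsed = time.monotonic() - start    # ~0.2 s for 20 steps at 100 Hz
```

For hard real-time guarantees you would still want the loop itself in a real-time-capable environment (e.g. a C++ process on a PREEMPT_RT kernel) with the trained model exported for fast inference, but the deadline-based structure is the same.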

How can I use Tensorflow to make cellular automata?

Knowing that TensorFlow is good for working with matrices, would I be able to use TensorFlow to create a cellular automaton? And would this offer a significant speedup over just coding it in plain Python?
Are there any tutorials or websites that could point me in the right direction to use Tensorflow for more general purpose computing than machine learning (for example, simulations)?
If so, could someone point me toward the kinds of TensorFlow operations I would need to learn to build this? Thanks!
A TensorFlow implementation is likely to offer an improvement in execution time, especially if executed on a GPU, since cellular automata can be updated in parallel. See: https://cs.stackexchange.com/a/320/67726.
A starting point for TensorFlow in general might be the official guide and documentation, which do go beyond just machine learning. Also available are two tutorials on non-ML examples: Mandelbrot Set, Partial Differential Equations.
While TensorFlow is usually mentioned in the context of machine learning, it is worth noting that:
TensorFlow™ is an open source software library for high performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices.
Edit: here's an implementation and a tutorial about Conway's Game of Life using TF.
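The reason Life maps so naturally onto tensor operations is that its update rule is just a neighbour count plus a threshold. The sketch below uses NumPy rolls for the neighbour sum so it stays self-contained; a TensorFlow version would express the same sum as a single `tf.nn.conv2d` with a 3x3 kernel of ones (zero at the centre) and run it on the GPU.

```python
import numpy as np

def life_step(grid):
    """One Game of Life step on a toroidal (wrap-around) grid."""
    # Sum of the eight neighbours via shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell is alive next step if it has 3 neighbours,
    # or 2 neighbours and is currently alive.
    alive = (neighbours == 3) | ((neighbours == 2) & (grid == 1))
    return alive.astype(grid.dtype)

# A "blinker": three live cells in a row oscillate with period 2.
grid = np.zeros((5, 5), dtype=np.int64)
grid[2, 1:4] = 1
after_one = life_step(grid)   # the row becomes a column
after_two = life_step(after_one)
```

Because every cell's update depends only on its local neighbourhood, the whole-grid convolution is embarrassingly parallel, which is exactly where the GPU speedup in the linked tutorial comes from.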

Using scikit learn for Neural Networks vs Tensorflow in training

I was implementing some sample neural networks and saw this statement in most tutorials:
Neural networks tend to work better on GPUs than on CPU.
The scikit-learn framework isn’t built for GPU optimization.
So does this statement ("work better") refer solely to the training phase of a neural network, or does it include the prediction part as well? I would greatly appreciate some explanation of this.
That statement refers to the training phase. The point is that with a GPU you can explore the search space of feasible models more efficiently, so you will probably find better models in less time. However, this relates only to computational cost, not to the predictive performance of any given model.

Is there a C# example of Particle Swarm Optimization PSO using Encog 3.2?

I have searched hard for a Particle Swarm Optimization example using the Encog 3.2 C# version. I would be very appreciative if someone could share one. My array type is continuous.
Thanks,
Dan Hickman
Encog uses Particle Swarm Optimization (PSO) as a means of training a neural network. So it is really just another trainer that can be swapped in for backprop, RPROP, and the others. There is an example included:
https://github.com/encog/encog-dotnet-core/blob/master/ConsoleExamples/Examples/XOR/XORPSO.cs