Training with TensorFlow and Colab

I'm working on a project that involves temperature readings and LEDs. I need to train the connection between a certain temperature and the LED state, but only with TensorFlow and code, not a real LED. For example, if the temperature is 37 degrees, the LED is ON, and if the temperature is 39, the LED is OFF. What can I do to train this kind of connection between these variables?

The first step is to generate a CSV dataset with the true labels for each temperature.
For example:
21,1
37,1
39,0
50,0
Then split this dataset into training and testing sets. A good split is 80% (training) to 20% (testing). Then use this data to train your TensorFlow model, which will have a single output that is either a 1 or a 0.
Once you have trained your model to fit this data, you can use the predict function to determine whether the LED should be on or off.
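Here is a minimal sketch of that approach in Keras (which ships with TensorFlow). The layer sizes, epoch count, and the toy data are illustrative assumptions, not anything the question prescribes:

import numpy as np
import tensorflow as tf

# Toy data in the CSV format above: temperature, label (1 = LED on).
data = np.array([[21, 1], [37, 1], [39, 0], [50, 0]], dtype=np.float32)
x, y = data[:, :1], data[:, 1]  # in practice, load the CSV and split 80/20

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # single 0/1-style output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=200, verbose=0)  # scaling the temperatures first helps convergence

# Probabilities above 0.5 mean "LED on".
print(model.predict(np.array([[37.0], [39.0]])) > 0.5)

The sigmoid output is a probability, so thresholding it at 0.5 turns the prediction back into the 1/0 LED state from the CSV.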


YOLOv4 loss too high

I am using YOLOv4-tiny on a custom dataset of 26 classes that I collected from the Open Images Dataset. The dataset is almost balanced (850 images per class, but a different number of bounding boxes per class). When I used YOLOv4-tiny to train on just 3 classes, the loss was near 0.5 and the model was fairly accurate. But with 26 classes, as soon as the loss goes below 2 the model starts to overfit, and the predictions are also very inaccurate.
I have tried changing parameters like the learning rate, the momentum, and the input size, but whatever I do the model becomes worse than before. Using the regular YOLOv4 model rather than YOLOv4-tiny does not help either. How can I bring the loss further down?
Have you tried training with mAP? You can take a subset of your training set and make it the validation set, in the same way you made your training and test split. Then run darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -map. This will keep track of the loss on your validation set. When the error on the validation set goes up, that is the time to stop training and prevent overfitting (this is called early stopping).
You need to run the training for classes * 2000 iterations. However, for the best scores you need to train your model for at least 6000 iterations (this limit is known as max_batches). Also remember that if you are using black-and-white images, change channels=3 to channels=1. You can stop your training once the average loss drops to something like 0.XXXX.
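As a rough illustration, for the 26-class case those cfg edits would look like the excerpt below. The concrete values just follow the classes * 2000 rule above; the exact lines and their positions depend on your yolo-obj.cfg:

[net]
channels=3          # change to channels=1 for black-and-white images

max_batches=52000   # classes * 2000 = 26 * 2000
steps=41600,46800   # conventionally 80% and 90% of max_batches

[convolutional]
filters=93          # (classes + 5) * 3, in each conv layer right before a [yolo] layer

[yolo]
classes=26          # repeat in every [yolo] layer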
Here's my mAP graph for 6000 iterations, which ran for 6.2 hours: [figure: avg loss with 6000 max_batches].
Moreover, you can follow the FAQ documentation by Stéphane Charette.

What is the best machine learning model to train on time series data? [Not forecasting]

I have a set of time series data belonging to 5 different classes: EEG data (1 data point per second), divided into 30-40 second epochs, with each epoch classified into one of the classes A, B, C, D, E. So basically I have around 13500 labelled samples.
[10,5,48,75,1,...,22,45,8] = A
[26,47,8,77,4,...,56,88,96] = B, and so on.
What I did was feed these data directly to a neural network and train the model, but the accuracy was very low, around 40%. What I want to know is: rather than just using a plain neural network, what is the best model for training on time series data?
In the case of time series data, some architectures perform quite well:

- Recurrent neural networks (with LSTM or GRU cells, for example), designed to train on sequences of data; an example paper is https://arxiv.org/pdf/1812.04818.pdf. You should then find or design your own architecture; a minimal Keras sketch follows this list.
- TCNs (temporal convolutional networks), which use causal and dilated convolutions to capture the temporal structure of the data; an example paper is https://arxiv.org/pdf/1905.03806.pdf.

I would personally go for these types of architecture; they are well suited to time series data.
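As a concrete starting point, here is a minimal Keras sketch of the RNN option. The shapes are assumptions based on the question (30 time steps at 1 Hz, 1 channel, 5 classes), and the random arrays are stand-ins for the real EEG epochs:

import numpy as np
import tensorflow as tf

n_samples, timesteps, n_classes = 13500, 30, 5
x = np.random.rand(n_samples, timesteps, 1).astype("float32")  # stand-in for EEG epochs
y = np.random.randint(0, n_classes, size=n_samples)            # stand-in labels A..E -> 0..4

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(timesteps, 1)),    # consumes the whole sequence
    tf.keras.layers.Dense(n_classes, activation="softmax"),  # one probability per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=10, validation_split=0.2)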

Multiple target (large) neural network regression using Python

My situation: I have an Excel file with 747 nodes (as input), each with a value (imagine 747 columns of floats), and an output of 741 values/columns, again floats. These are basically the inputs and outputs of a geological simulation. So one row has 747 (input) + 741 (output) = 1488 floats, which is one dataset (from one simulation). I have 4 such datasets (rows) to train a neural network, such that when I test it on 3 test datasets (747 columns each) I get the output of 741 columns. This is just a simple run to get the skeleton of the neural network going before further modifications.
I have come across the multi-target regression example on NYC taxi data (https://github.com/zeahmed/DeepLearningWithMLdotNet/tree/master/NYCTaxiMultiOutputRegression) but I can't seem to wrap my head around it.
This is the training set (Input till and including column 'ABS', rest is output):
https://docs.google.com/spreadsheets/d/12TKVbGExt9KcK5RQKTexrToVo8qA5YfeItSaa7E2QdU/edit?usp=sharing
This is the test set:
https://docs.google.com/spreadsheets/d/1-RjyZsdguucCSOr9QTdTp2ehJBqWCr5yz1-aRjQ_4zo/edit?usp=sharing
This is the test Output (To validate) : https://docs.google.com/spreadsheets/d/10O_6711CEpJ4DN1w-kCmW01NikjFVZTDmNRuqO3U_6A/edit?usp=sharing
Any guidance/tips would be much appreciated. TIA!
You can use an autoencoder-style (encoder-decoder) model for this task: it takes in the data and compresses it into a latent representation, and this representation vector is then used to construct the output variables.
So you can feed the 747-dimensional input vector to the model and have it generate the 741-dimensional output vector. After proper training, the model will be able to generate the target variables for a given set of inputs.
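A minimal sketch of that idea in Keras, treated as plain multi-output regression with a bottleneck (this is not the ML.NET NYCTaxi example from the question; the layer sizes are assumptions, and with only 4 training rows the model can only memorize them, so this is just a skeleton):

import numpy as np
import tensorflow as tf

n_in, n_out = 747, 741
x_train = np.random.rand(4, n_in).astype("float32")   # stand-in for the 4 simulation rows
y_train = np.random.rand(4, n_out).astype("float32")  # stand-in for the 741 output columns

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(n_in,)),  # "encoder"
    tf.keras.layers.Dense(64, activation="relu"),                        # latent representation
    tf.keras.layers.Dense(n_out),                  # linear output, one float per target column
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=100, verbose=0)

y_pred = model.predict(x_train[:1])  # shape (1, 741)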

Using ssd_inception_v2 to train at a different resolution

The dataset contains images of different sizes.
The pretrained weights are trained on 300x300 resolution.
I am training on widerface dataset where objects are as small as 15x15.
Q1. I want to train at 800x800 resolution. Do I need to resize all the images manually, or will this be done by TensorFlow automatically?
I am using the following command to train:
python3 /opt/github/models/research/object_detection/legacy/train.py --logtostderr --train_dir=/opt/github/object_detection_retraining/wider_face_checkpoint/ --pipeline_config_path=/opt/github/object_detection_retraining/models/ssd_inception_v2_coco_2018_01_28/pipeline.config
Q2. I also tried training with model_main.py, but after 1000 iterations it evaluates on the dataset at every iteration.
I am using the following command to train:
python3 /opt/github/models/research/object_detection/model_main.py --num_train_steps=200000 --logtostderr --model_dir=/opt/github/object_detection_retraining/wider_face_checkpoint/ --pipeline_config_path=/opt/github/object_detection_retraining/models/ssd_inception_v2_coco_2018_01_28/pipeline.config
Q3. Also, can you suggest any model I should use for real-time face detection, apart from MobileNet and Inception?
Thanks.
Q1. No, you do not need to resize manually. See this detailed answer.
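For reference, the TF Object Detection API resizes inputs according to the image_resizer block in pipeline.config, so switching to 800x800 is roughly an edit like this rather than a manual resize of the dataset:

model {
  ssd {
    image_resizer {
      fixed_shape_resizer {
        height: 800
        width: 800
      }
    }
    # ... rest of the model config unchanged
  }
}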
Q2. By 1000 iterations you mean steps, right? (An iteration here counts as a complete cycle through the dataset, i.e., an epoch.) Usually the model performs evaluation after a certain amount of time, e.g. every 10 minutes. So every 10 minutes, a checkpoint is saved and the model is evaluated on the evaluation set.
Q3. SSD models with a MobileNet backbone are among the fastest detectors; apart from that, you can try YOLO models for real-time detection.

How can I build a recurrent neural network to deal with time series to get a single continuous value?

I want to build an RNN with TensorFlow that can convert a time series to a single continuous value. For example, the input time series x is [x1,x2,x3,x4,...,xt] = [1,2,3,4,...,100], and the corresponding output y is 98.5, as if scoring the time series. I found this figure in Yoshua Bengio's deep learning book, and this RNN model is what I want. Is there any useful reading material that can help me solve this problem?
See the RNN section in the book "TensorFlow for Machine Intelligence", which describes different functionalities of RNN models with sample code.
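As a complement to the book, here is a minimal many-to-one sketch in Keras (my own illustration, not the book's code): the LSTM consumes the whole sequence and a linear Dense layer maps its final state to one continuous score. The single [1..100] -> 98.5 pair from the question serves as toy data:

import numpy as np
import tensorflow as tf

timesteps = 100
x = np.arange(1, timesteps + 1, dtype="float32").reshape(1, timesteps, 1)  # [1,2,...,100]
y = np.array([[98.5]], dtype="float32")  # the single target score

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(timesteps, 1)),  # returns only the final state
    tf.keras.layers.Dense(1),                              # linear output for regression
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=50, verbose=0)  # in practice, scale inputs and use many sequences
print(model.predict(x))  # a single continuous value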