How can I get reproducible results with Keras? I followed these steps, but I still get different results every time I run the Jupyter notebook. I also tried setting shuffle=False when calling model.fit().
My configuration:
conda 4.3.25
keras 2.0.6 with tensorflow backend
tensorflow-gpu 1.2.1
python 3.5
windows 10
See the answer I posted on another question. The main idea is to first disable the GPU and then seed the libraries your code uses (numpy, random, etc.). To sum up, including the code below at the beginning of your script may help solve the problem.
import os

# Disable the GPU: GPU kernels are a common source of non-determinism.
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import numpy as np
import tensorflow as tf
import random as rn
from keras import backend as K

# Seed numpy, Python's random module, the hash seed, and TensorFlow.
sd = 1
np.random.seed(sd)
rn.seed(sd)
os.environ['PYTHONHASHSEED'] = str(sd)

# Force single-threaded execution; thread scheduling can reorder
# floating-point operations and make results non-reproducible.
config = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
tf.set_random_seed(sd)
sess = tf.Session(graph=tf.get_default_graph(), config=config)
K.set_session(sess)
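To verify the setup, a minimal sketch (not part of the original answer) is to append a tiny training run like the one below and execute the whole script twice; with the seeding in place, both runs should print exactly the same loss.
# Hypothetical check: run the full script twice and compare the printed loss.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(4, activation='relu', input_shape=(3,)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')

X = np.random.randn(64, 3)
y = np.random.randn(64, 1)
hist = model.fit(X, y, epochs=2, batch_size=8, shuffle=False, verbose=0)
print(hist.history['loss'][-1])  # identical across runs if seeding worked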
I'm trying to get used to TensorBoard, and I write my models in PyTorch.
However, when I try to view my model using the add_graph() function, I get this:
With this as the test code:
import numpy as np
import torch
import torchvision.transforms as transforms
import torch.nn as nn
import torch.optim as optim
from torch.utils.tensorboard import SummaryWriter
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.linear = nn.Linear(2, 1)

    def forward(self, x):
        x = self.linear(x)
        return x
writer = SummaryWriter('runs_pytorch/test')
net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
writer.add_graph(net, torch.zeros([4, 2], dtype=torch.float))
writer.close()
On the other hand, if I try to view a graph using TensorFlow, everything seems fine,
with this as the test code this time:
import tensorflow as tf
tf.Variable(42, name='foo')
w = tf.summary.FileWriter('runs_tensorflow/test')
w.add_graph(tf.get_default_graph())
w.flush()
w.close()
In case you are wondering, I'm using this command to start tensorboard:
tensorboard --logdir runs_pytorch
Something I noticed: when I point it at the directory for my TensorFlow test, I get the usual message with the address, but if I do the same thing with --logdir runs_pytorch I get something more:
W1010 15:19:24.225109 15308 plugin_event_accumulator.py:294] Found more than one graph event per run, or there was a metagraph containing a graph_def, as well as one or more graph events. Overwriting the graph with the newest event.
W1010 15:19:24.226075 15308 plugin_event_accumulator.py:322] Found more than one "run metadata" event with tag step1. Overwriting it with the newest event.
I'm on Windows, and I tried different browsers (Chrome, Firefox, ...).
I have tensorflow 1.14.0, torch 1.2.0, and Python 3.7.3.
Thank you very much for your help; it's driving me crazy!
There are two ways to solve it:
1. Update PyTorch to 1.3.0 or above:
conda way:
conda install pytorch torchvision cudatoolkit=9.2 -c pytorch
pip way:
pip3 install torch==1.3.0+cu92 torchvision==0.4.1+cu92 -f https://download.pytorch.org/whl/torch_stable.html
2. Install tensorboardX instead:
First uninstall tensorboard:
if your tensorboard was installed by pip:
pip uninstall tensorboard
if your tensorboard was installed by anaconda:
conda uninstall tensorboard
Then install tensorboardX:
pip install tensorboardX
When writing your script,
change
from torch.utils.tensorboard import SummaryWriter
to
from tensorboardX import SummaryWriter
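With that one-line change the rest of the script can stay as it is, since tensorboardX mirrors the same SummaryWriter API. A minimal sketch, reusing the Net class from the question:
from tensorboardX import SummaryWriter  # drop-in replacement for torch.utils.tensorboard
import torch

writer = SummaryWriter('runs_pytorch/test')
net = Net()  # the same model class defined in the question
# tensorboardX exposes the same add_graph(model, dummy_input) signature
writer.add_graph(net, torch.zeros([4, 2], dtype=torch.float))
writer.close()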
This might have been caused by this known problem, and it seems that it was solved in PyTorch 1.3, which was released yesterday; check out the Bug Fixes section in the release notes.
I wanted to try out the embeddings provided in tensorflow-hub, specifically the 'universal-sentence-encoder'. I tried the examples provided (https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb)
and they worked fine. So I tried to do the same with the 'multilingual' model, but every time the multilingual model is loaded, the Colab kernel fails and restarts. What is the problem, and how can I get around it?
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
import tf_sentencepiece
import sentencepiece
# Import the Universal Sentence Encoder's TF Hub module
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-multilingual/1")  # This is where the kernel dies.
print("imported model")
# Compute a representation for each message, showing various lengths supported.
word = "코끼리"
sentence = "나는 한국어로 쓰여진 문장이야."
paragraph = (
"동해물과 백두산이 마르고 닳도록. "
"하느님이 보우하사 우리나라 만세~")
messages = [word, sentence, paragraph]
# Reduce logging output.
tf.logging.set_verbosity(tf.logging.ERROR)
with tf.Session() as session:
    session.run([tf.global_variables_initializer(), tf.tables_initializer()])
    message_embeddings = session.run(embed(messages))
    for i, message_embedding in enumerate(np.array(message_embeddings).tolist()):
        print("Message: {}".format(messages[i]))
        print("Embedding size: {}".format(len(message_embedding)))
        message_embedding_snippet = ", ".join(
            (str(x) for x in message_embedding[:3]))
        print("Embedding: [{}, ...]\n".format(message_embedding_snippet))
I had similar issues with the multilingual sentence encoder. I resolved them by pinning tensorflow to version 1.14.0 and tf-sentencepiece to 0.1.83, so before running your code in Colab try:
!pip3 install tensorflow==1.14.0
!pip3 install tensorflow-hub
!pip3 install sentencepiece
!pip3 install tf-sentencepiece==0.1.83
I was able to replicate your problem in Colab, and this solution loaded the model correctly.
It seems to be a compatibility problem between sentencepiece and tensorflow; check for updates on the issue here.
Let us know how it goes. Best of luck, and I hope this helps.
EDIT: If tensorflow version 1.14.0 does not work, change it to 1.13.1. This problem should be resolved once compatibility between tensorflow and sentencepiece is worked out.
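As a quick way to confirm the pins actually took effect (a small sketch, not from the original answer), print the versions before loading the module; remember that Colab needs a runtime restart after downgrading tensorflow before the new version becomes importable.
# Hypothetical version check: confirm the runtime picked up the pinned packages.
import pkg_resources
import tensorflow as tf

print(tf.__version__)  # expect 1.14.0 (or 1.13.1)
print(pkg_resources.get_distribution('tf-sentencepiece').version)  # expect 0.1.83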
I am trying to fit a model here, but every time I fit a model my kernel dies. I tried every other method, but it didn't work.
I think there may be a possibility of having two Python versions installed, but I don't know how to fix that or even verify it.
Also, I am using a Mac.
I have tried updating and reinstalling everything.
#Importing libraries
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder,OneHotEncoder,StandardScaler
from sklearn.model_selection import train_test_split,cross_val_score
from keras.layers import Dense
import keras
from sklearn.metrics import confusion_matrix,accuracy_score
from keras.wrappers.scikit_learn import KerasClassifier
#Importing Datasets
dataset=pd.read_csv('Churn_Modelling.csv')
X=dataset.iloc[:,3:13].values
y=dataset.iloc[:,13].values
#Data preprocessing
le1=LabelEncoder()
X[:,1]=le1.fit_transform(X[:,1])
le2=LabelEncoder()
X[:,2]=le2.fit_transform(X[:,2])
h1=OneHotEncoder(categorical_features=[1])
X=h1.fit_transform(X).toarray()
X=X[:,1:]
#Splitting Dataset
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=0)
#Feature Scaling
sc=StandardScaler()
X_train=sc.fit_transform(X_train)
X_test=sc.transform(X_test)
#Making ANN hidden layer
classifier=keras.models.Sequential()
classifier.add(Dense(units=6,activation="relu",kernel_initializer="uniform",input_shape=(11,)))
#Adding second hidden layer
classifier.add(Dense(units=6,activation='relu',kernel_initializer='uniform'))
#Adding output layer
classifier.add(Dense(units=1,activation='sigmoid',kernel_initializer='uniform'))
#Compiling ANN
classifier.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
Up to here it works like a charm, with some warnings.
#Making predictions and evaluating it
classifier.fit(X_train,y_train,epochs=100,batch_size=10)
But when I execute this, it shows:
An error ocurred while starting the kernel
b''
Does anyone know how to solve this?
This might help you: https://github.com/spyder-ide/spyder/issues/2812
If you are using Spyder, try:
conda update setuptools
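Since you also suspected two Python installations, a quick way to verify which interpreter the kernel is actually using (a small sketch, not from the linked issue):
# Hypothetical check: print which Python this kernel runs on.
import sys
print(sys.executable)  # path of the interpreter backing the kernel
print(sys.version)     # its exact version
If this path differs from the output of which python in your terminal, two installations are indeed in play.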
I've been trying to run Keras on a CPU cluster, and for this I need to limit the number of cores used (it's a shared system). To limit the number of cores, I landed on this answer. However, it simply doesn't work. I tried running this basic code:
from keras.applications.vgg16 import VGG16
from keras import backend as K
import numpy as np
conf = K.tf.ConfigProto(device_count={'CPU': 1},
                        intra_op_parallelism_threads=2,
                        inter_op_parallelism_threads=2)
K.set_session(K.tf.Session(config=conf))
model = VGG16(weights='imagenet', include_top=False)
x = np.random.randn(1000, 224, 224, 3)
features = model.predict(x)
When I run this and check htop, it uses all (128) logical cores. Is this a bug in Keras, or am I doing something wrong?
Keras also warns that my CPU supports SSE4.1 and SSE4.2 instructions, which are not used because TensorFlow wasn't compiled from source. Will compiling from source also fix the original question?
EDIT: I've found a workaround when launching the keras script from a unix machine:
taskset -c 0-23 python keras_script.py
This will run the script on the first 24 cores of the machine. It works, but it would still be nice if this were available from within keras/tensorflow.
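As an in-process alternative on Linux (a sketch using the standard library, not something keras/tensorflow provides), the same pinning can be done from inside the script with os.sched_setaffinity:
import os

# Pin the current process (pid 0) to cores 0-23,
# equivalent to `taskset -c 0-23` but done from within Python.
os.sched_setaffinity(0, set(range(24)))
Note this only restricts which cores the threads may run on; TensorFlow still creates its usual number of threads.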
I found this snippet of code that works for me, hope it helps:
from keras import backend as K
import tensorflow as tf
jobs = 2  # number of cores to use
config = tf.ConfigProto(intra_op_parallelism_threads=jobs,
                        inter_op_parallelism_threads=jobs,
                        allow_soft_placement=True,
                        device_count={'CPU': jobs})
session = tf.Session(config=config)
K.set_session(session)
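If you are on TensorFlow 2.x, where ConfigProto and sessions no longer exist, the equivalent thread limits can be set with the tf.config.threading API (a sketch; these calls must run before TensorFlow initializes its thread pools):
import tensorflow as tf

# TF 2.x equivalent of the ConfigProto thread settings above.
tf.config.threading.set_intra_op_parallelism_threads(2)
tf.config.threading.set_inter_op_parallelism_threads(2)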
Every time I run an LSTM network with Keras in a Jupyter notebook, I get a different result. I have googled a lot and tried several different solutions, but none of them work. Here are some of the solutions I tried:
set numpy random seed
random_seed=2017
from numpy.random import seed
seed(random_seed)
set tensorflow random seed
from tensorflow import set_random_seed
set_random_seed(random_seed)
set built-in random seed
import random
random.seed(random_seed)
set PYTHONHASHSEED
import os
os.environ['PYTHONHASHSEED'] = '0'
add PYTHONHASHSEED to the Jupyter notebook kernel.json
{
"language": "python",
"display_name": "Python 3",
"env": {"PYTHONHASHSEED": "0"},
"argv": [
"python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
]
}
and the versions in my environment are:
Keras: 2.0.6
Tensorflow: 1.2.1
CPU or GPU: CPU
and this is my code:
model = Sequential()
model.add(LSTM(16, input_shape=(time_steps,nb_features), return_sequences=True))
model.add(LSTM(16, input_shape=(time_steps,nb_features), return_sequences=False))
model.add(Dense(8,activation='relu'))
model.add(Dense(1,activation='linear'))
model.compile(loss='mse',optimizer='adam')
The seed is definitely missing from your model definition. Detailed documentation can be found here: https://keras.io/initializers/.
In essence, your layers use random variables as the basis for their parameters, so you get different outputs every time.
One example:
model.add(Dense(1, activation='linear',
                kernel_initializer=keras.initializers.RandomNormal(seed=1337),
                bias_initializer=keras.initializers.Constant(value=0.1)))
Keras themselves have a section about obtaining reproducible results in their FAQ (https://keras.io/getting-started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development). They provide the following code snippet to produce reproducible results:
import numpy as np
import tensorflow as tf
import random as rn
# The below is necessary in Python 3.2.3 onwards to
# have reproducible behavior for certain hash-based operations.
# See these references for further details:
# https://docs.python.org/3.4/using/cmdline.html#envvar-PYTHONHASHSEED
# https://github.com/fchollet/keras/issues/2280#issuecomment-306959926
import os
os.environ['PYTHONHASHSEED'] = '0'
# The below is necessary for starting Numpy generated random numbers
# in a well-defined initial state.
np.random.seed(42)
# The below is necessary for starting core Python generated random numbers
# in a well-defined state.
rn.seed(12345)
# Force TensorFlow to use single thread.
# Multiple threads are a potential source of
# non-reproducible results.
# For further details, see: https://stackoverflow.com/questions/42022950/which-seeds-have-to-be-set-where-to-realize-100-reproducibility-of-training-res
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
from keras import backend as K
# The below tf.set_random_seed() will make random number generation
# in the TensorFlow backend have a well-defined initial state.
# For further details, see: https://www.tensorflow.org/api_docs/python/tf/set_random_seed
tf.set_random_seed(1234)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)
Keras + Tensorflow.
Step 1: disable the GPU.
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = ""
Step 2: seed the libraries included in your code, such as tensorflow, numpy, and random.
import tensorflow as tf
import numpy as np
import random as rn
sd = 1 # Here sd means seed.
np.random.seed(sd)
rn.seed(sd)
os.environ['PYTHONHASHSEED']=str(sd)
from keras import backend as K
config = tf.ConfigProto(intra_op_parallelism_threads=1,inter_op_parallelism_threads=1)
tf.set_random_seed(sd)
sess = tf.Session(graph=tf.get_default_graph(), config=config)
K.set_session(sess)
Make sure these two pieces of code are included at the start of your code, then the result will be reproducible.
I resolved this issue by adding os.environ['TF_DETERMINISTIC_OPS'] = '1'.
Here is an example:
import os
os.environ['TF_DETERMINISTIC_OPS'] = '1'
# rest of the code
# TensorFlow version 2.3.1
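On TensorFlow 2.x the flag is usually combined with explicit seeding of every RNG that training touches; a fuller sketch (assuming TF 2.1 or later, where TF_DETERMINISTIC_OPS is honored) might look like:
import os
os.environ['TF_DETERMINISTIC_OPS'] = '1'  # request deterministic (GPU) kernels

import random
import numpy as np
import tensorflow as tf

# Seed Python, NumPy, and TensorFlow RNGs.
seed = 1
random.seed(seed)
np.random.seed(seed)
tf.random.set_seed(seed)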