Initializing a TensorFlow Variable with an array larger than 2GB

I am trying to initialize a tensorflow Variable with pre-trained word2vec embeddings.
I have the following code:
import tensorflow as tf
from gensim import models
model = models.Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
X = model.syn0
embeddings = tf.Variable(tf.random_uniform(X.shape, minval=-0.1, maxval=0.1), trainable=False)
sess = tf.Session()
sess.run(tf.initialize_all_variables())
sess.run(embeddings.assign(X))
And I am receiving the following error:
ValueError: Cannot create an Operation with a NodeDef larger than 2GB.
The array (X) I am trying to assign is of shape (3000000, 300) and its size is 3.6GB.
I am getting the same error if I try tf.convert_to_tensor(X) as well.
I know that it fails because the array is larger than 2GB, but I do not know how to assign an array that large to a TensorFlow Variable.

It seems like the only option is to use a placeholder. The cleanest way I can find is to initialize the variable from a placeholder directly:
X_init = tf.placeholder(tf.float32, shape=(3000000, 300))
X = tf.Variable(X_init)
# The rest of the setup...
sess.run(tf.initialize_all_variables(), feed_dict={X_init: model.syn0})

The easiest solution is to feed the array through a placeholder node and tf.assign it to the variable:
X = tf.Variable([0.0])
place = tf.placeholder(tf.float32, shape=(3000000, 300))
# validate_shape=False lets the variable take on the placeholder's shape at assignment time
set_x = tf.assign(X, place, validate_shape=False)
# set up your session here....
sess.run(set_x, feed_dict={place: model.syn0})
As Joshua Little noted in a separate answer, you can also use it in the initializer:
X = tf.Variable(place) # place as defined above
...
init = tf.initialize_all_variables()
... create sess ...
sess.run(init, feed_dict={place: model.syn0})

Try this:
import tensorflow as tf
from gensim import models
model = models.KeyedVectors.load_word2vec_format('./GoogleNews-vectors-negative300.bin', binary=True)
X = model.syn0
embeddings = tf.Variable(tf.random_uniform(X.shape, minval=-0.1, maxval=0.1), trainable=False)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
embeddings.load(model.syn0, sess)  # Variable.load feeds the value in at run time instead of baking it into the graph
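For what it's worth, under TensorFlow 2.x eager execution this limit does not come up in the first place, because the array is copied straight into the variable rather than serialized into a GraphDef node. A minimal sketch under that assumption (newer gensim releases expose the matrix as model.vectors instead of syn0):
import tensorflow as tf
from gensim import models
model = models.KeyedVectors.load_word2vec_format('./GoogleNews-vectors-negative300.bin', binary=True)
# Eager mode: the numpy array becomes the variable's value directly,
# so no oversized constant node ever has to be serialized into a graph.
embeddings = tf.Variable(model.vectors, trainable=False)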

Related

how to initialize a Variable tensor for the weight matrix in a keras model?

I am trying to use tensor Variables as the weights in a keras layer.
I know that I can use numpy arrays instead but the reason I want to feed tensors is that I want my weight matrices to be of the type SparseTensor.
This is a small example that I have coded so far:
import tensorflow as tf  # imports added for completeness; assuming tf.keras on TF 1.15
from tensorflow import keras, nn

def model_keras(seed, new_hidden_size_list=None):
    number_of_layers = 1
    hidden_size = 512
    hidden_size_list = [hidden_size] * number_of_layers
    input_size = 784
    output_size = 10
    if new_hidden_size_list is not None:
        hidden_size_list = new_hidden_size_list
    weight_input = tf.Variable(tf.random.normal([784, 512], mean=0.0, stddev=1.0))
    bias_input = tf.Variable(tf.random.normal([512], mean=0.0, stddev=1.0))
    weight_output = tf.Variable(tf.random.normal([512, 10], mean=0.0, stddev=1.0))
    # This gives me an error when I try to use it as kernel_initializer and bias_initializer in the keras model
    weight_initializer_input = tf.initializers.variables([weight_input])
    bias_initializer_input = tf.initializers.variables([bias_input])
    weight_initializer_output = tf.initializers.variables([weight_output])
    # This works fine
    #weight_initializer_input = tf.initializers.lecun_uniform(seed=None)
    #bias_initializer_input = tf.initializers.lecun_uniform(seed=None)
    #weight_initializer_output = tf.initializers.lecun_uniform(seed=None)
    print(weight_initializer_input, bias_initializer_input, weight_initializer_output)
    model = keras.models.Sequential()
    for index in range(number_of_layers):
        if index == 0:
            # input layer
            model.add(keras.layers.Dense(hidden_size_list[index], activation=nn.selu, use_bias=True,
                                         kernel_initializer=weight_initializer_input,
                                         bias_initializer=bias_initializer_input,
                                         input_shape=(input_size,)))
        else:
            model.add(keras.layers.Dense(hidden_size_list[index], activation=nn.selu, use_bias=True,
                                         kernel_initializer=weight_initializer_hidden,
                                         bias_initializer=bias_initializer_hidden))
    # output layer
    model.add(keras.layers.Dense(output_size, use_bias=False, kernel_initializer=weight_initializer_output))
    model.add(keras.layers.Activation(nn.softmax))
    return model
I am using tensorflow 1.15.
Any idea how one can use custom (user-defined) tensor Variables as initializers instead of the pre-set schemes (e.g. Glorot, truncated normal, etc.)? Another approach I could take is to explicitly define the computations instead of using keras.Layer.
Many thanks
Your code works after enabling eager execution.
import tensorflow as tf
tf.compat.v1.enable_eager_execution()
Add this at the top of your file.
See this for working code.
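If the goal is simply to start the Dense layers from your own pre-computed values, a technique that sidesteps the initializer-op issue altogether is tf.constant_initializer. This is only a minimal sketch under that assumption (the shapes follow the question, the random arrays are stand-ins for whatever values you actually have, and it does not cover the SparseTensor case):
import numpy as np
import tensorflow as tf
from tensorflow import keras
# Hypothetical pre-computed weights with the shapes used in the question.
w_in = np.random.normal(size=(784, 512)).astype(np.float32)
b_in = np.zeros(512, dtype=np.float32)
model = keras.models.Sequential([
    keras.layers.Dense(512, activation=tf.nn.selu, input_shape=(784,),
                       kernel_initializer=tf.constant_initializer(w_in),
                       bias_initializer=tf.constant_initializer(b_in)),
    keras.layers.Dense(10, use_bias=False),
    keras.layers.Activation(tf.nn.softmax),
])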

How to use tensorflow.distributions in a custom loss function for a keras model

For a deep learning model I defined with tf2.0 keras, I need to write a custom loss function.
As this will depend on things like entropy and the normal log_prob, it would really make my life less miserable if I could use tf.distributions.Normal and use two model outputs as mu and sigma, respectively.
However, as soon as I put this into my loss function, I get the Keras error that no gradient is defined for this function.
ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
I tried encapsulating the call in a tf.contrib.eager.Variable, as I read somewhere. It did not help.
What is the trick to using them? I don't see a fundamental architectural reason why I should not be able to use them in a mixed form.
# this is just an example which does not really give a meaningful result
import tensorflow as tf
import tensorflow.keras as K
import numpy as np

def custom_loss_fkt(extra_output):
    def loss(y_true, y_pred):
        dist = tf.distributions.Normal(loc=y_pred, scale=extra_output)
        d = dist.entropy()
        return K.backend.mean(d)
    return loss

input_node = K.layers.Input(shape=(1,))
dense = K.layers.Dense(8, activation='relu')(input_node)
#dense = K.layers.Dense(4, activation='relu')(dense)
out1 = K.layers.Dense(4, activation='linear')(dense)
out2 = K.layers.Dense(4, activation='linear')(dense)
model = K.Model(inputs=input_node, outputs=[out1, out2])
model.compile(optimizer='adam', loss=[custom_loss_fkt(out2), custom_loss_fkt(out1)])
model.summary()
x = np.zeros((1, 1))
y1 = np.array([[0., 0.1, 0.2, 0.3]])
y2 = np.array([[0.1, 0.1, 0.1, 0.1]])
model.fit(x, [y1, y2], epochs=1000, verbose=0)
print(model.predict(x))

tensorflow CNN with complex features and labels?

I recently found a paper where they used a CNN with complex 2D feature maps as the input. However, their network also outputs a complex output vector. They used Keras with the tensorflow backend.
Here is the link: https://arxiv.org/pdf/1802.04479.pdf
I asked myself whether it is possible to build complex-valued deep neural networks like CNNs with tensorflow. As far as I know it is not possible. Did I miss something?
There are other related questions which address the same problem with no answer: Complex convolution in tensorflow
When building a really senseless model with real-valued input and output, everything works correctly:
import tensorflow as tf
from numpy import random, empty

n = 10
feature_vec_real = random.rand(1, n)
X_real = tf.placeholder(tf.float64, feature_vec_real.shape)

def model(x):
    out = tf.layers.dense(
        inputs=x,
        units=2
    )
    return out

model_output = model(X_real)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
output = sess.run(model_output, feed_dict={X_real: feature_vec_real})
but when using complex inputs:
import tensorflow as tf
from numpy import random, empty

n = 10
feature_vec_complex = empty(shape=(1, n), dtype=complex)
feature_vec_complex.real = random.rand(1, n)
feature_vec_complex.imag = random.rand(1, n)
X_complex = tf.placeholder(tf.complex128, feature_vec_complex.shape)

def complex_model(x):
    out = tf.layers.dense(
        inputs=x,
        units=2
    )
    return out

model_output = complex_model(X_complex)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
output = sess.run(model_output, feed_dict={X_complex: feature_vec_complex})
I get the following error:
ValueError: An initializer for variable dense_7/kernel of <dtype: 'complex128'> is required
So what is the correct way to initialize the weights of the dense kernel when having complex inputs?
I know there is the possibility to handle complex numbers as two different layers in the network. But this is not what I want.
Thanks for your help!
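The error is tf.layers.dense refusing to pick a default (Glorot-style) initializer for a complex dtype. A minimal sketch of one possible workaround, offered only as an assumption and not taken from the post, is to build the complex kernel yourself from a real and an imaginary variable while keeping the layer genuinely complex-valued:
import tensorflow as tf
from numpy import random, empty

n = 10
feature_vec_complex = empty(shape=(1, n), dtype=complex)
feature_vec_complex.real = random.rand(1, n)
feature_vec_complex.imag = random.rand(1, n)
X_complex = tf.placeholder(tf.complex128, feature_vec_complex.shape)

def complex_dense(x, units):
    # Real-valued variables get the usual default initializers;
    # tf.complex combines them into one complex128 kernel and bias.
    in_dim = int(x.shape[-1])
    w_re = tf.get_variable('w_re', shape=[in_dim, units], dtype=tf.float64)
    w_im = tf.get_variable('w_im', shape=[in_dim, units], dtype=tf.float64)
    b_re = tf.get_variable('b_re', shape=[units], dtype=tf.float64)
    b_im = tf.get_variable('b_im', shape=[units], dtype=tf.float64)
    return tf.matmul(x, tf.complex(w_re, w_im)) + tf.complex(b_re, b_im)

model_output = complex_dense(X_complex, 2)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
output = sess.run(model_output, feed_dict={X_complex: feature_vec_complex})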

How to set the weight of tf.slim with a numpy array

I would like to show my example below:
x = tf.placeholder(dtype=...)
a = numpy.asarray([784, 10])
z = slim.fully_connected(x, 10, weights_initializer=?)
I have tried weights_initializer=lambda x1: a, but it reports the error: TypeError: <lambda>() got an unexpected keyword argument 'dtype'
I also found another post here: https://github.com/tensorflow/tensorflow/issues/4016
However, I still don't know the answer. Thank you very much.
Sorry, I don't really understand what you're trying to do.
If your fully connected layer has 10 hidden neurons, then your initializer must have shape (input_size, 10); what you're giving it is a (2,) shape. Secondly, to initialize the weights with a constant matrix you should use the tf.constant_initializer(..) function.
Are you trying to do the following? (You can change the init function used with numpy.)
import tensorflow as tf
import numpy as np

slim = tf.contrib.slim

input_size = ?
# batch dimension added so the layer receives a rank-2 input
x = tf.placeholder(dtype=tf.float32, shape=[None, input_size])
a = np.random.normal(size=(input_size, 10))
z = slim.fully_connected(x, 10,
                         weights_initializer=tf.constant_initializer(a))
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
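A quick sanity check, shown only as a hypothetical usage (it assumes input_size has been given a concrete value):
batch = np.random.rand(4, input_size).astype(np.float32)
print(sess.run(z, feed_dict={x: batch}).shape)  # -> (4, 10)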

ValueError: Tensor Tensor(...) is not an element of this graph. When using global variable keras model

I'm running a web server using flask, and the error comes up when I try to use vgg16, which is the global variable for keras' pre-trained VGG16 model. I have no idea why this error arises or whether it has anything to do with the Tensorflow backend.
Here is my code:
vgg16 = VGG16(weights='imagenet', include_top=True)

def getVGG16Prediction(img_path):
    global vgg16
    img = image.load_img(img_path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    pred = vgg16.predict(x)
    return x, sort(decode_predictions(pred, top=3)[0])

@app.route("/uploadMultipleImages", methods=["POST"])
def uploadMultipleImages():
    uploaded_files = request.files.getlist("file[]")
    for file in uploaded_files:
        path = os.path.join(STATIC_PATH, file.filename)
        pInput, result = getVGG16Prediction(path)
Here is the full error:
Any comment or suggestion is greatly appreciated. Thank you.
Take a look at avital's answer on this github issue. Quoting the relevant part here:
Right after loading or constructing your model, save the TensorFlow graph:
graph = tf.get_default_graph()
In the other thread (or perhaps in an asynchronous event handler), do:
global graph
with graph.as_default():
    (... do inference here ...)
I modified this a bit and stored the graph in my app's config object instead of making it a global.
The TensorFlow documentation for get_default_graph explains why this is necessary:
NOTE: The default graph is a property of the current thread. If you create a new thread, and wish to use the default graph in that thread, you must explicitly add a with g.as_default(): in that thread's function.
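A minimal sketch of that arrangement, with hypothetical route and config names, assuming standalone Keras on the TF 1.x backend as in the question:
import tensorflow as tf
from flask import Flask
from keras.applications.vgg16 import VGG16

app = Flask(__name__)

# Load the model once at startup and remember the graph it was built in.
app.config['VGG16'] = VGG16(weights='imagenet', include_top=True)
app.config['TF_GRAPH'] = tf.get_default_graph()

@app.route('/predict', methods=['POST'])
def predict():
    # Flask may serve this request from a different thread, so re-enter
    # the graph that the model's tensors belong to before predicting.
    with app.config['TF_GRAPH'].as_default():
        model = app.config['VGG16']
        # ... build the input array from the uploaded image and call model.predict(...) here ...
    return 'ok'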