while_loop in tensorflow returns type error

I am confused why the following code returns this error message:
Traceback (most recent call last):
File "/Users/Desktop/TestPython/tftest.py", line 46, in <module>
main(sys.argv[1:])
File "/Users/Desktop/TestPython/tftest.py", line 35, in main
result = tf.while_loop(Cond_f2, Body_f1, loop_vars=loopvars)
File "/Users/Desktop/HPC_LIB/TENSORFLOW/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2518, in while_loop
result = context.BuildLoop(cond, body, loop_vars, shape_invariants)
File "/Users/Desktop/HPC_LIB/TENSORFLOW/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2356, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/Users/Desktop/HPC_LIB/TENSORFLOW/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2292, in _BuildLoop
c = ops.convert_to_tensor(pred(*packed_vars))
File "/Users/Desktop/TestPython/tftest.py", line 18, in Cond_f2
boln = tf.less(tf.cast(tf.constant(ind), dtype=tf.int32), tf.cast(tf.constant(N), dtype=tf.int32))
File "/Users/Desktop/HPC_LIB/TENSORFLOW/lib/python2.7/site-packages/tensorflow/python/framework/constant_op.py", line 163, in constant
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape))
File "/Users/Desktop/HPC_LIB/TENSORFLOW/lib/python2.7/site-packages/tensorflow/python/framework/tensor_util.py", line 353, in make_tensor_proto
_AssertCompatible(values, dtype)
File "/Users/Desktop/HPC_LIB/TENSORFLOW/lib/python2.7/site-packages/tensorflow/python/framework/tensor_util.py", line 287, in _AssertCompatible
raise TypeError("List of Tensors when single Tensor expected")
TypeError: List of Tensors when single Tensor expected
I would appreciate if someone could help me fix this error. Thanks!
from math import *
import numpy as np
import sys
import tensorflow as tf

def Body_f1(n, ind, N, T):
    # Compute trace
    a = tf.trace(tf.random_normal(0.0, 1.0, (n, n)))
    # Update trace
    a = tf.cast(a, dtype=T.dtype)
    T = tf.scatter_update(T, ind, a)
    # Update index
    ind = ind + 1
    return n, ind, N, T

def Cond_f2(n, ind, N, T):
    boln = tf.less(tf.cast(tf.constant(ind), dtype=tf.int32), tf.cast(tf.constant(N), dtype=tf.int32))
    return boln

def main(argv):
    # Open tensorflow session
    sess = tf.Session()
    # Parameters
    N = 10
    T = tf.zeros((N), dtype=tf.float64)
    n = 4
    ind = 0
    # While loop
    loopvars = [n, ind, N, T]
    result = tf.while_loop(Cond_f2, Body_f1, loop_vars=loopvars, shape_invariants=None,
                           parallel_iterations=1, back_prop=False, swap_memory=False, name=None)
    trace = result[3]
    trace = sess.run(trace)
    print trace
    print 'Done!'
    # Close tensorflow session
    if session==None:
        sess.close()

if __name__ == "__main__":
    main(sys.argv[1:])
Update: I have added the full error message. I am not sure why I get this error message. Does loop_vars expect a single tensor and not a list of tensors? I hope not.

tf.constant expects a non-Tensor value, like a Python list or a numpy array. You can get the same error by nesting tf.constant calls, as in tf.constant(tf.constant(5.)). Removing those tf.constant calls fixes that first error. It's a very poor error message, so I would encourage you to file a bug on GitHub.
It also looks like the arguments to random_normal are a bit mixed up; keyword arguments are good for avoiding issues like that:
tf.random_normal(mean=0.0, stddev=1.0, shape=(n, n))
Finally, scatter_update expects a variable. It looks like a TensorArray may be what you're looking for here (or one of the higher-level looping constructs which use a TensorArray implicitly).
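For instance, here is a minimal TF 1.x sketch of the question's loop rebuilt around a TensorArray; the names and sizes follow the question (ten traces of random 4x4 matrices), but the exact structure is an assumption, not the only way to do it:

import tensorflow as tf

def cond(n, ind, N, T):
    # the loop variables are already tensors, so no tf.constant is needed
    return tf.less(ind, N)

def body(n, ind, N, T):
    # trace of a random (n, n) matrix; note the keyword arguments
    a = tf.trace(tf.random_normal(shape=(n, n), mean=0.0, stddev=1.0))
    T = T.write(ind, tf.cast(a, tf.float64))  # TensorArray instead of scatter_update
    return n, ind + 1, N, T

N = 10
T = tf.TensorArray(dtype=tf.float64, size=N)
_, _, _, T = tf.while_loop(cond, body, loop_vars=[4, 0, N, T])
trace = T.stack()  # gather the N traces into one tensor

with tf.Session() as sess:
    print(sess.run(trace))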

Related

SentenceTransformers throwing KeyError on pandas Series

I'm using the following simplified code:
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
where sentences is a pandas Series, containing the sentences I want to transform.
I then get the following error traceback:
embeddings = model.encode(sentences)
File "/anaconda/envs/topics/lib/python3.8/site-packages/sentence_transformers/SentenceTransformer.py", line 157, in encode
sentences_sorted = [sentences[idx] for idx in length_sorted_idx]
File "/anaconda/envs/topics/lib/python3.8/site-packages/sentence_transformers/SentenceTransformer.py", line 157, in <listcomp>
sentences_sorted = [sentences[idx] for idx in length_sorted_idx]
File "/anaconda/envs/topics/lib/python3.8/site-packages/pandas/core/series.py", line 942, in
__getitem__
return self._get_value(key)
File "/anaconda/envs/topics/lib/python3.8/site-packages/pandas/core/series.py", line 1051, in
_get_value
loc = self.index.get_loc(label)
File "/anaconda/envs/topics/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 3363, in get_loc
raise KeyError(key) from err
KeyError: 144
The actual solution was to convert the pandas Series to a numpy array:
sentences_array = sentences.to_numpy()
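For context, a minimal sketch of the failure and the fix; the Series below is hypothetical, with the kind of non-contiguous index left behind by filtering rows, which is what makes the label lookups inside encode() raise KeyError:

import pandas as pd
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')

# hypothetical Series with gaps in its index (e.g. after dropping rows);
# model.encode(sentences) would fail here with KeyError on a missing label
sentences = pd.Series(['first sentence', 'second sentence'], index=[10, 144])

# converting to a plain array restores positional indexing
embeddings = model.encode(sentences.to_numpy())
print(embeddings.shape)  # (2, 384) for all-MiniLM-L6-v2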
The embeddings are returned in array or tensor form, so you can also use one of the following calls to solve this issue:
embeddings = model.encode(sentences, convert_to_tensor=True)
or
embeddings = model.encode(sentences, convert_to_numpy=True)

What do I need in order to save animation videos from matplotlib in mp3 format?

I am using Python 3.8 on Linux Mint 19.3, and I am trying to save an animation created by a cellular automata model in matplotlib. My actual model code is private, but it saves the animation the same way as the code shown below, which is a slight modification of one of the examples in the official matplotlib documentation:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

fig, ax = plt.subplots()

def f(x, y):
    return np.sin(x) + np.cos(y)

x = np.linspace(0, 2 * np.pi, 120)
y = np.linspace(0, 2 * np.pi, 100).reshape(-1, 1)

fig, ax = plt.subplots()
ims = []
for i in range(60):
    x += np.pi / 15.
    y += np.pi / 20.
    im = ax.imshow(f(x, y), animated=True)
    if i == 0:
        ax.imshow(f(x, y))  # show an initial one first
    ims.append([im])

ani = animation.ArtistAnimation(fig, ims, interval=50, blit=True,
                                repeat_delay=1000)

# To save the animation, use e.g.
#
# ani.save("movie.mp4")
#
# or
#
writer = animation.FFMpegWriter(fps=15, metadata=dict(artist='Me'), bitrate=1800)
ani.save("movie.mp3", writer=writer)
When executed, the code produces this error:
MovieWriter stderr:
Output file #0 does not contain any stream
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/matplotlib/animation.py", line 234, in saving
yield self
File "/usr/local/lib/python3.8/dist-packages/matplotlib/animation.py", line 1093, in save
writer.grab_frame(**savefig_kwargs)
File "/usr/local/lib/python3.8/dist-packages/matplotlib/animation.py", line 351, in grab_frame
self.fig.savefig(self._proc.stdin, format=self.frame_format,
File "/usr/local/lib/python3.8/dist-packages/matplotlib/figure.py", line 3046, in savefig
self.canvas.print_figure(fname, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/matplotlib/backend_bases.py", line 2319, in print_figure
result = print_method(
File "/usr/local/lib/python3.8/dist-packages/matplotlib/backend_bases.py", line 1648, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/matplotlib/_api/deprecation.py", line 415, in wrapper
return func(*inner_args, **inner_kwargs)
File "/usr/local/lib/python3.8/dist-packages/matplotlib/backends/backend_agg.py", line 486, in print_raw
fh.write(renderer.buffer_rgba())
BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/justin/animation_test.py", line 36, in <module>
ani.save("movie.mp3", writer=writer)
File "/usr/local/lib/python3.8/dist-packages/matplotlib/animation.py", line 1093, in save
writer.grab_frame(**savefig_kwargs)
File "/usr/lib/python3.8/contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.8/dist-packages/matplotlib/animation.py", line 236, in saving
self.finish()
File "/usr/local/lib/python3.8/dist-packages/matplotlib/animation.py", line 342, in finish
self._cleanup() # Inline _cleanup() once cleanup() is removed.
File "/usr/local/lib/python3.8/dist-packages/matplotlib/animation.py", line 373, in _cleanup
raise subprocess.CalledProcessError(
subprocess.CalledProcessError: Command '['ffmpeg', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-s', '640x480', '-pix_fmt', 'rgba', '-r', '15', '-loglevel', 'error', '-i', 'pipe:', '-vcodec', 'h264', '-pix_fmt', 'yuv420p', '-b', '1800k', '-metadata', 'artist=Me', '-y', 'movie.mp3']' returned non-zero exit status 1.
I have looked at posts on similar queries concerning matplotlib animations, but none have specifically included the error Output file #0 does not contain any stream. I have little experience with ffmpeg, so I am wondering what might be missing.

Custom keras loss function using scipy

I would like to implement a custom loss function, and I am using tensorflow with keras backend.
In my loss function, for each training sample (a 2D matrix of size 2048x192) I would like to add a bandpassed version of the corresponding training sample as a constant (non-trainable) value.
I implemented the bandpass filter based on How to implement band-pass Butterworth filter with Scipy.signal.butter:
from scipy.signal import butter, sosfiltfilt

def butter_bandpass_sos(lowcut, highcut, fs, order=5):
    nyq = 0.5 * fs
    low = lowcut / nyq
    high = highcut / nyq
    sos = butter(order, [low, high], analog=False, btype='band', output='sos')
    return sos

def butter_bandpass_filter_sos(data, lowcut, highcut, fs, order=5):
    sos = butter_bandpass_sos(lowcut, highcut, fs, order=order)
    y = sosfiltfilt(sos, data)
    return y
and for the loss function, based on Adding a constant to Loss function in Tensorflow, I implemented:
from tensorflow.python.keras import backend
import tensorflow as tf

lowcut = 2.5e6
highcut = 7.5e6
order = 5
fs = 40e6

def HP_func(mat):
    for i in range(0, 192):
        RF_ch = mat[:, i]
        y = butter_bandpass_filter_sos(RF_ch, lowcut, highcut, fs, order=order)
        mat_band_sos[:, i] = y
    return mat_band_sos

def my_custom_loss_HF(y_true, y_pred):
    HF_mat = HP_func(y_true)
    loss = backend.sqrt(tf.keras.losses.mean_squared_error(y_true, y_pred)) + HF_mat
    return loss
I have three branches and therefore three losses:
model.compile(loss=['mean_squared_error', my_custom_loss_HF, 'mean_squared_error'],
              loss_weights=[1.0, 1.0, 1.0],
              optimizer='Adam',
              metrics=['mae', rmse])
but I am getting this error:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-19-01efc0717223>", line 6, in <module>
metrics=['mae', rmse])
File "/home/z003zpjj/venv/lib/python3.6/site-packages/tensorflow/python/training/tracking/base.py", line 457, in _method_wrapper
result = method(self, *args, **kwargs)
File "/home/z003zpjj/venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 337, in compile
self._compile_weights_loss_and_weighted_metrics()
File "/home/z003zpjj/venv/lib/python3.6/site-packages/tensorflow/python/training/tracking/base.py", line 457, in _method_wrapper
result = method(self, *args, **kwargs)
File "/home/z003zpjj/venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1710, in _compile_weights_loss_and_weighted_metrics
self.total_loss = self._prepare_total_loss(masks)
File "/home/z003zpjj/venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1770, in _prepare_total_loss
per_sample_losses = loss_fn.call(y_true, y_pred)
File "/home/z003zpjj/venv/lib/python3.6/site-packages/tensorflow/python/keras/losses.py", line 215, in call
return self.fn(y_true, y_pred, **self._fn_kwargs)
File "<ipython-input-18-a4f1cf924d3f>", line 3, in my_custom_loss_HF
HF_mat = HP_fun(y_true)
File "<ipython-input-17-74a2f0e736b9>", line 19, in HP_fun
y = butter_bandpass_filter_sos(RF_ch, lowcut, highcut, fs, order=order)
File "<ipython-input-2-4e34aa35b4cd>", line 69, in butter_bandpass_filter_sos
y = sosfiltfilt(sos, data)
File "/home/z003zpjj/venv/lib/python3.6/site-packages/scipy/signal/signaltools.py", line 4131, in sosfiltfilt
x = _validate_x(x)
File "/home/z003zpjj/venv/lib/python3.6/site-packages/scipy/signal/signaltools.py", line 3926, in _validate_x
raise ValueError('x must be at least 1D')
ValueError: x must be at least 1D
How can I use the scipy function in my loss function?
You need to rewrite the function to work with tensors; otherwise it has no gradient, and the network can't be trained.
You can only use tensor-aware functions, like those in the Keras backend.
Your function even contains a Python for loop; that kind of logic won't work inside a loss function.
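One way to do that, sketched below under the assumption that an FIR approximation of the bandpass is acceptable: design the kernel once with scipy outside the graph, then apply it inside the loss with tf.nn.conv1d so every step is a tensor op. The filter specs come from the question; the firwin design and the way the bandpassed target enters the loss are my assumptions, not the original Butterworth/sosfiltfilt logic:

import tensorflow as tf
from scipy.signal import firwin

# Design the bandpass kernel once, outside the graph (specs from the question)
fs, lowcut, highcut, numtaps = 40e6, 2.5e6, 7.5e6, 101
kernel = firwin(numtaps, [lowcut, highcut], fs=fs, pass_zero=False)
kernel_tf = tf.constant(kernel[:, None, None], dtype=tf.float32)

def bandpass(mat):
    # mat: (batch, 2048, 192); filter each of the 192 channels along time
    x = tf.transpose(mat, [0, 2, 1])      # (batch, 192, 2048)
    x = tf.reshape(x, [-1, 2048, 1])      # (batch*192, 2048, 1)
    y = tf.nn.conv1d(x, kernel_tf, stride=1, padding='SAME')
    y = tf.reshape(y, [-1, 192, 2048])
    return tf.transpose(y, [0, 2, 1])     # back to (batch, 2048, 192)

def my_custom_loss_HF(y_true, y_pred):
    # RMSE plus a term derived from the bandpassed target, all in TF ops
    rmse = tf.sqrt(tf.reduce_mean(tf.square(y_true - y_pred)))
    return rmse + tf.reduce_mean(tf.abs(bandpass(y_true)))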

How to avoid set_index on a pre-sorted DataFrame constructed with from_delayed?

I am trying to get the expression df.resample('1T', how='mean').sum() to work in Dask, but I am running into an issue where it seems like Dask needs me to explicitly call set_index on the DataFrame before resampling. I get an error as below...
>>> c.gather(df).compute()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/distributed/client.py", line 1508, in gather
asynchronous=asynchronous)
File "/usr/local/lib/python2.7/site-packages/distributed/client.py", line 615, in sync
return sync(self.loop, func, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/distributed/utils.py", line 253, in sync
six.reraise(*error[0])
File "/usr/local/lib/python2.7/site-packages/distributed/utils.py", line 238, in f
result[0] = yield make_coro()
File "/usr/local/lib64/python2.7/site-packages/tornado/gen.py", line 1055, in run
value = future.result()
File "/usr/local/lib64/python2.7/site-packages/tornado/concurrent.py", line 238, in result
raise_exc_info(self._exc_info)
File "/usr/local/lib64/python2.7/site-packages/tornado/gen.py", line 1063, in run
yielded = self.gen.throw(*exc_info)
File "/usr/local/lib/python2.7/site-packages/distributed/client.py", line 1385, in _gather
traceback)
File "/usr/local/lib/python2.7/site-packages/dask/dataframe/core.py", line 1633, in resample
return _resample(self, rule, how=how, closed=closed, label=label)
File "/usr/local/lib/python2.7/site-packages/dask/dataframe/tseries/resample.py", line 33, in _resample
return getattr(resampler, how)()
File "/usr/local/lib/python2.7/site-packages/dask/dataframe/tseries/resample.py", line 151, in mean
return self._agg('mean')
File "/usr/local/lib/python2.7/site-packages/dask/dataframe/tseries/resample.py", line 126, in _agg
meta_r = self.obj._meta_nonempty.resample(self._rule, **self._kwargs)
File "/usr/local/lib64/python2.7/site-packages/pandas/core/generic.py", line 7104, in resample
base=base, key=on, level=level)
File "/usr/local/lib64/python2.7/site-packages/pandas/core/resample.py", line 1148, in resample
return tg._get_resampler(obj, kind=kind)
File "/usr/local/lib64/python2.7/site-packages/pandas/core/resample.py", line 1276, in _get_resampler
"but got an instance of %r" % type(ax).__name__)
TypeError: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, but got an instance of 'Index'
Below is the Python code I am using. Since the pandas DFs returned by my delayed objects were already timestamp indexed, my expectation was for Dask to infer/construct an index from those DFs' timestamp indices instead of me having to explicitly set one. That said, I am unsure how an explicit set_index could even be called in this case (what arguments should be passed?). Setting a pd.DatetimeIndex on the meta dataframe (the commented line below) works. Is constructing the index by hand and feeding it to meta the only realistic way to do this? Am I missing something?
#! /usr/bin/env python
# Start dask scheduler and workers
# dask-scheduler &
# dask-worker --nthreads 1 --nprocs 6 --memory-limit 3GB localhost:8786 --local-directory /dev/shm &

from dask.distributed import Client
from dask.delayed import delayed
import pandas as pd
import numpy as np
import dask.dataframe as dd
import time

c = Client('127.0.0.1:8786')

def load(epoch):
    # 1525132800 - 1/5
    # 1527811200 - 1/6
    num_ts=100
    idx = []
    for ts in range(0, 86400, 15):
        idx.append(epoch + ts)
    d = np.random.rand(86400/15, num_ts)
    ts = []
    for i in range(0, num_ts):
        # tsname = "ts_%s_%s" % (i, epoch)
        tsname = "ts_%s" % (i)
        ts.append(tsname)
        gts.append(tsname)
    res = pd.DataFrame(index=idx, data=d, columns=ts, dtype=np.float64)
    res.index = pd.to_datetime(arg=res.index, unit='s')
    return res

gts = []
load(1525132800)
print time.time()
i = pd.DatetimeIndex(start=1525132800, freq='15S', end=1527811185, dtype='datetime64[s]')
# meta = pd.DataFrame(index=i, data=[], columns=gts, dtype=np.float64)
meta = pd.DataFrame(index=[], data=[], columns=gts, dtype=np.float64)
dfs = [delayed(load)(fn) for fn in range(1525132800, 1527811200, 86400)]
print time.time()
df = dd.from_delayed(dfs, meta, 'sorted')
print time.time()
df.npartitions
df.divisions
print time.time()
df = c.submit(dd.DataFrame.resample, df, rule='1T', how='mean')
print time.time()
#df = c.submit(dd.DataFrame.sum, df, axis=1)
print time.time()
c.gather(df).compute()
print time.time()
#c.gather(df).visualize(filename='/usr/share/nginx/html/svg/df4.svg')
Dask uses the meta of a data-frame to infer the data types before computing any of the chunks of data. In your case, your chunks contain datetime indexes, but the meta doesn't. The meta should be a zero-length version of the data:
meta = pd.DataFrame(index=i[:0], data=[], columns=gts, dtype=np.float64)
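For reference, a sketch of how the script above would change, reusing i, gts, and dfs from the question; whether you then compute locally or submit the work to the cluster is orthogonal to the meta fix:

import dask.dataframe as dd
import numpy as np
import pandas as pd

# i, gts and dfs are the objects built in the script above;
# a zero-length frame whose index keeps the DatetimeIndex dtype
meta = pd.DataFrame(index=i[:0], data=[], columns=gts, dtype=np.float64)
df = dd.from_delayed(dfs, meta, 'sorted')

# resample now works without an explicit set_index
result = df.resample('1T').mean().sum(axis=1).compute()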

Keras flask API not giving me output

I am very new to Flask. I developed a document classification model using a CNN in Keras with Python 3. Below is the code I am using for the app.py file on a Windows machine.
I got the code example from here and adapted it to suit my needs.
import os
from flask import jsonify
from flask import request
from flask import Flask
import numpy as np
from keras.models import model_from_json
from keras.models import load_model
from keras.preprocessing.text import Tokenizer, text_to_word_sequence
from keras.preprocessing.sequence import pad_sequences

# start Flask application
app = Flask(__name__)

path = 'C:/Users/user/Model/'
json_file = open(path+'/model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
keras_model_loaded = model_from_json(loaded_model_json)
keras_model_loaded.load_weights(path+'/model.h5')
print('Model loaded...')

def preprocess_text(text, num_max=1000, max_review_length=100):
    tok = Tokenizer(num_words=num_max)
    tok.fit_on_texts(texts)
    cnn_texts_seq = tok.texts_to_sequences(texts)
    cnn_texts_mat = sequence.pad_sequences(cnn_texts_seq, maxlen=max_review_length)
    return cnn_texts_mat

# URL that we'll use to make predictions using get and post
@app.route('/predict', methods=['GET', 'POST'])
def predict():
    try:
        text = request.args.get('text')
        x = preprocess_text(text)
        y = int(np.round(keras_model_loaded.predict(x)))
        # print(y)
        return jsonify({'prediction': str(y)})
    except:
        response = jsonify({'error': 'problem predicting'})
        response.status_code = 400
        return response

if __name__ == "__main__":
    port = int(os.environ.get('PORT', 5000))
    # Run locally
    app.run(host='0.0.0.0', port=port)
On my Windows machine, I navigate in the console to the path where I have saved the app.py file and execute the command py -3.6 app.py.
When I go to the URL http://localhost:5000/predict and type in the browser
http://localhost:5000/predict?text=I've had my Fire HD 8 two weeks now and I love it. This tablet is a great value. We are Prime Members and that is where this tablet SHINES.
it does not give me any class as output; instead I get {"error":"problem predicting"}.
Any help on how to fix this?
Edit: I removed the try/except block in the predict function. Below is how the predict function looks now:
def predict():
    text = request.args.get('text')
    x = preprocess_text(text)
    y = int(np.round(keras_model_loaded.predict(x)))
    return jsonify({'prediction': str(y)})
Now I am getting an exception. The error message is:
[2018-05-28 18:33:59,008] ERROR in app: Exception on /predict [GET]
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\flask\app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\flask\app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\flask\app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\flask\_compat.py", line 35, in reraise
raise value
File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\flask\app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\flask\app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "app.py", line 59, in predict
x = preprocess_text(text)
File "app.py", line 37, in preprocess_text
tok.fit_on_texts(texts)
NameError: name 'texts' is not defined
127.0.0.1 - - [28/May/2018 18:33:59] "GET /predict?text=I%27ve%20had%20my%20Fire%20HD%208%20two%20weeks%20now%20and%20I%20love%20it.%20This%20tablet%20is%20a%20great%20value.%20We%20are%20Prime%20Members%20and%20that%20is%20where%20this%20tablet%20SHINES. HTTP/1.1" 500 -
Edit 2: I have edited the code to:
def preprocess_text(texts, num_max=1000, max_review_length=100):
    tok = Tokenizer(num_words=num_max)
    tok.fit_on_texts(texts)
    cnn_texts_seq = tok.texts_to_sequences(texts)
    cnn_texts_mat = pad_sequences(cnn_texts_seq, maxlen=max_review_length)
    return cnn_texts_mat

# URL that we'll use to make predictions using get and post
@app.route('/predict', methods=['GET', 'POST'])
def predict():
    text = request.args.get('text')
    x = preprocess_text(text)
    y = keras_model_loaded.predict(x)
    return jsonify({'prediction': str(y)})
and now the error message is:
packages\tensorflow\python\framework\ops.py", line 3402, in _as_graph_element_locked
raise ValueError("Tensor %s is not an element of this graph." % obj)
ValueError: Tensor Tensor("output/Sigmoid:0", shape=(?, 1), dtype=float32) is not an element of this graph.
127.0.0.1 - - [28/May/2018 19:39:11] "GET /predict?text=I%27ve%20had%20my%20Fire%20HD%208%20two%20weeks%20now%20and%20I%20love%20it.%20This%20tablet%20is%20a%20great%20value.%20We%20are%20Prime%20Members%20and%20that%20is%20where%20this%20tablet%20SHINES. HTTP/1.1" 500 -
I am unable to understand and debug this error, and I'm not sure what it means. Can anyone help me understand it and suggest a solution?
Also, I am unable to post the entire error message on Stack Overflow, as most of the chunk in my question appears to be code.
Thanks!!
Now it is what I guessed: there is a known problem when using TensorFlow across threads in Flask. Here is a fix for it:
import tensorflow as tf

# ...

graph = tf.get_default_graph()

def predict():
    text = request.args.get('text')
    x = preprocess_text(text)
    with graph.as_default():
        y = int(np.round(keras_model_loaded.predict(x)))
    return jsonify({'prediction': str(y)})
by wrapping the prediction so that it explicitly runs in the graph the model was loaded into.
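For placement, a short sketch assuming the model-loading code from the question: capture the graph right after the weights are loaded, at import time, so the handler above can enter it on every request:

keras_model_loaded = model_from_json(loaded_model_json)
keras_model_loaded.load_weights(path + '/model.h5')
graph = tf.get_default_graph()  # capture right after loading, at import time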