If I do the following, the code works:
df_sma = btalib.sma(df['price'].loc[symbol],period=5).df
df.loc[[symbol],'sma'] = df_sma.values
However, if I just add .iloc[-10:] to it:
df_sma = btalib.sma(df['price'].loc[symbol].iloc[-10:],period=5).df
df.loc[[symbol],'sma'].iloc[-10:] = df_sma.values
I get this error:
ValueError: could not broadcast input array from shape (10,1) into shape (10,)
What exactly changed, and why does it throw that error?
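For reference, a minimal sketch of the mismatch the error describes: btalib's .df appears to be a one-column DataFrame, so df_sma.values is 2-D with shape (10, 1), while the sliced target is a 1-D Series of shape (10,). (Names below mirror the question; this is an illustration, not the original data.)

import numpy as np
import pandas as pd

target = pd.Series(np.zeros(10))  # shape (10,), like the sliced 'sma' column
source = np.zeros((10, 1))        # shape (10, 1), like df_sma.values from a one-column DataFrame
# target.iloc[:] = source         # ValueError: could not broadcast (10,1) into (10,)
target.iloc[:] = source.ravel()   # flattening to shape (10,) makes the assignment line up

Note also that chaining .loc[...] and .iloc[...] on the left-hand side assigns into a temporary copy, so even with matching shapes the original df may not be updated.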
I have an array of shape (4478,):
print(Customer_Reaction_Array.shape) -> (4478,)
I want to load/copy the array Customer_Reaction_Array into another array of shape (4478, 96):
y = np.zeros([len(Customer_Reaction_Array), Customer_Reaction_Array[0].shape[0]])
print(y.shape) -> (4478, 96)
It can copy the array up to index 455, i.e. y[455,:] = Customer_Reaction_Array[455] works.
Then I get this error:
ValueError: could not broadcast input array from shape (0) into shape (96)
My code is:
for i in range(len(Customer_Reaction_Array)):
    y[i,:] = Customer_Reaction_Array[i]
Can anyone help me to solve the problem?
I am not sure I understand your problem completely, but if you want to copy an existing array along a new axis, then use:
initial_dimension = Customer_Reaction_Array.shape
second_dimension = 96
y = np.repeat(Customer_Reaction_Array, second_dimension).reshape(*initial_dimension, second_dimension)
You can check y.T to get the transpose of that array in case you need it oriented the other way.
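Separately, the error message itself points at the likely root cause: broadcasting "from shape (0)" means the element being copied is empty. A quick check you could run (a sketch, assuming Customer_Reaction_Array is an object array whose entries should each have shape (96,)):

import numpy as np

# List every entry whose shape is not (96,); since the loop failed right
# after index 455, index 456 is the likely offender.
bad = [i for i, row in enumerate(Customer_Reaction_Array)
       if np.asarray(row).shape != (96,)]
print(bad)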
I am following a video on YouTube (linked here), which shows this code:
text_1 = tf.ragged.constant(
    [['who', 'is', 'Goerge', 'Washington'],
     ['What', 'is', 'the', 'weather', 'tomorrow']])
text_2 = tf.ragged.constant(['goodnight'])
text = tf.concat(text_1, text_2)
print(text)
But it raises the ValueError as follows:
ValueError: Tensor conversion requested dtype int32 for Tensor with dtype string:
What am I doing wrong?
The docs say that tf.concat takes a list of tensors and an axis as arguments. In your call, text_2 ends up in the axis position, which is why TensorFlow tries to convert the string tensor to int32. It should look like this:
text = tf.concat([text_1, text_2], axis=-1)
This raises a ValueError because the shapes of the tensors don't match. Please specify what you want to achieve.
Edit:
In the video you linked to there appears to be a syntax error in this line: text_2 = tf.ragged.constant(['goodnight']]). (The brackets don't match.) It should really be text_2 = tf.ragged.constant([['goodnight']]), which achieves the result printed below the operation in the video.
tf.concat requires a list of tensors and an axis, and text_2 should have the same number of dimensions as text_1:
text_1 = tf.ragged.constant(
    [['who', 'is', 'Goerge', 'Washington'],
     ['What', 'is', 'the', 'weather', 'tomorrow']])
text_2 = tf.ragged.constant([['goodnight']])
text = tf.concat([text_1, text_2], 0)
print(text)
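Under TensorFlow 2.x, print(text) should then show a three-row ragged tensor, roughly like this (shown as the expected shape of the result, not verified output):

<tf.RaggedTensor [[b'who', b'is', b'Goerge', b'Washington'], [b'What', b'is', b'the', b'weather', b'tomorrow'], [b'goodnight']]>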
First some of my code:
...
fc_1 = layers.Dense(256, activation='relu')(drop_reshape)
bi_LSTM_2 = layers.Lambda(buildGruLayer)(fc_1)
...
def buildGruLayer(inputs):
    gru_cells = []
    gru_cells.append(tf.contrib.rnn.GRUCell(256))
    gru_cells.append(tf.contrib.rnn.GRUCell(128))
    gru_layers = tf.keras.layers.StackedRNNCells(gru_cells)
    inputs = tf.unstack(inputs, axis=1)
    outputs, _ = tf.contrib.rnn.static_rnn(
        gru_layers,
        inputs,
        dtype='float32')
    return outputs
The error I am getting when running static_rnn is:
raise TypeError("Cannot convert value %r to a TensorFlow DType." % type_value)
TypeError: Cannot convert value None to a TensorFlow DType.
The input that comes into the layer has shape (64, 238, 256).
Does anyone have a clue what the problem could be? I already googled the error but couldn't find anything. Any help is much appreciated.
If anyone still needs a solution to this: it's because you need to specify the dtype for the GRUCell, e.g. tf.float32.
Its default is None, which according to the documentation means the dtype is then taken from the input; here it stays None (in TensorFlow an unknown value shows up as ? or None), hence the error.
Check the dtype argument from :
https://www.tensorflow.org/api_docs/python/tf/compat/v1/nn/rnn_cell/GRUCell
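Applied to the code in the question, the fix is a two-line change (a sketch, assuming your TF 1.x build's GRUCell accepts the dtype keyword as the linked compat.v1 docs show):

# dtype given explicitly so it no longer defaults to None
gru_cells.append(tf.contrib.rnn.GRUCell(256, dtype=tf.float32))
gru_cells.append(tf.contrib.rnn.GRUCell(128, dtype=tf.float32))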
The following code uses a tf.while_loop(...) for computations of a dynamic length.
outputs_tensor_array = tf.TensorArray(tf.float32,
                                      size=0,
                                      clear_after_read=False,
                                      infer_shape=False,
                                      dynamic_size=True,
                                      element_shape=[self.batch_size, self.size])
initial_args = [outputs_tensor_array, 0]
outputs, *_ = tf.while_loop(lambda out, idx, *_: idx < max_len,
                            func,
                            initial_args + additional_args,
                            parallel_iterations=32,
                            swap_memory=True)
outputs = outputs.stack()
I'm wondering if it's possible to enforce a size, or at least make that dimension something other than None, in order to enforce a size constraint and enable further computations down the graph. The current shape is [?, batch, hidden_size].
tensor.set_shape will refine the static shape information and throw an error if it is incompatible with current static shape information (in the TensorArray.stack() case it will let you set any value for the zeroth dimension's static shape information).
tf.reshape can also be useful for asserting/filling in shape information, although it's not perfect. It will only throw an error if the size of the Tensor is wrong when the graph is executed (and may otherwise hide a shape error downstream).
A more involved option: you can also use set_shape for the static shape information and then tf.Assert with tf.shape to check the Tensor's shape when the graph is executed.
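Putting those suggestions together, a minimal sketch (assuming max_len is a plain Python int and self.batch_size and self.size are known):

outputs = outputs_tensor_array.stack()  # static shape: [?, batch_size, size]
# Refine the unknown zeroth dimension statically; raises if incompatible.
outputs.set_shape([max_len, self.batch_size, self.size])

# Optionally also check the dynamic shape when the graph runs:
assert_op = tf.Assert(tf.equal(tf.shape(outputs)[0], max_len), [tf.shape(outputs)])
with tf.control_dependencies([assert_op]):
    outputs = tf.identity(outputs)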
I want to get the extension of image files to invoke different image decoders, and I found there's a function called tf.string_split in TensorFlow r0.11.
filename_queue = tf.train.string_input_producer(filenames, shuffle=shuffle)
reader = tf.WholeFileReader()
img_src, img_bytes = reader.read(filename_queue)
split_result = tf.string_split(img_src, '.')
But when I run it, I get this error:
ValueError: Shape must be rank 1 but is rank 0 for 'StringSplit' (op: 'StringSplit') with input shapes: [], [].
I think it may be caused by the shape inference of img_src. I tried to use img_src.set_shape([1,]) to fix it, but it doesn't seem to work; I get this error:
ValueError: Shapes () and (1,) are not compatible
Also, I can't get the shape of img_src using
tf.Print(split_result, [tf.shape(img_src)],'img_src shape=')
The result is img_src shape=[]. But if I use the following code:
tf.Print(split_result, [img_src],'img_src=')
The result is img_src=test_img/test1.png. Am I doing something wrong?
Just pack img_src into a list: reader.read returns a scalar (rank-0) string, and tf.string_split expects a rank-1 tensor. That's also why img_src.set_shape([1,]) fails, since set_shape can only refine a shape, not change its rank.
split_result = tf.string_split([img_src], '.')
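Note that tf.string_split returns a SparseTensor, not a dense one; the pieces live in its values attribute. A small sketch of pulling out the extension (assuming the filename contains at least one dot):

split_result = tf.string_split([img_src], '.')    # SparseTensor of filename components
parts = split_result.values                       # e.g. [b'test_img/test1', b'png']
extension = tf.gather(parts, tf.size(parts) - 1)  # last component is the extension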