TensorFlow with custom gym environment: Layer "dense_6" expects 1 input(s), but it received 2 input tensors

I am trying to use TF to solve a custom gym environment, all within Google Colab.
The main script is the TF "DQN Tutorial" available here.
In place of env_name = "CartPole-v0" I am using env_name = "gym_examples/GridWorld-v0", where gym_examples/GridWorld-v0 is the sample custom environment described in the gym documentation here. (That example uses gym v0.25.0 but TF requires gym <= v0.23.0, so I also had to tweak the rendering code a bit to make it work in v0.23.0.)
The environment loads fine via env = suite_gym.load(env_name), and subsequent code cells run fine as well, until the following two cells:
fc_layer_params = (100, 50)
action_tensor_spec = tensor_spec.from_spec(env.action_spec())
num_actions = action_tensor_spec.maximum - action_tensor_spec.minimum + 1

# Define a helper function to create Dense layers configured with the right
# activation and kernel initializer.
def dense_layer(num_units):
    return tf.keras.layers.Dense(
        num_units,
        activation=tf.keras.activations.relu,
        kernel_initializer=tf.keras.initializers.VarianceScaling(
            scale=2.0, mode='fan_in', distribution='truncated_normal'))

# QNetwork consists of a sequence of Dense layers followed by a dense layer
# with `num_actions` units to generate one q_value per available action as
# its output.
dense_layers = [dense_layer(num_units) for num_units in fc_layer_params]
q_values_layer = tf.keras.layers.Dense(
    num_actions,
    activation=None,
    kernel_initializer=tf.keras.initializers.RandomUniform(
        minval=-0.03, maxval=0.03),
    bias_initializer=tf.keras.initializers.Constant(-0.2))
q_net = sequential.Sequential(dense_layers + [q_values_layer])
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)

train_step_counter = tf.Variable(0)

agent = dqn_agent.DqnAgent(
    train_env.time_step_spec(),
    train_env.action_spec(),
    q_network=q_net,
    optimizer=optimizer,
    td_errors_loss_fn=common.element_wise_squared_loss,
    train_step_counter=train_step_counter)

agent.initialize()
After that cell, I get an error:
ValueError: Exception encountered when calling layer "sequential_2" (type Sequential).
Layer "dense_6" expects 1 input(s), but it received 2 input tensors. Inputs received: [<tf.Tensor: shape=(1, 2), dtype=int64, numpy=array([[2, 2]])>, <tf.Tensor: shape=(1, 2), dtype=int64, numpy=array([[3, 2]])>]
Call arguments received by layer "sequential_2" (type Sequential):
• inputs={'agent': 'tf.Tensor(shape=(1, 2), dtype=int64)', 'target': 'tf.Tensor(shape=(1, 2), dtype=int64)'}
• network_state=()
• kwargs={'step_type': 'tf.Tensor(shape=(1,), dtype=int32)', 'training': 'None'}
In call to configurable 'DqnAgent' (<class 'tf_agents.agents.dqn.dqn_agent.DqnAgent'>)
I'm too much of a TF novice to understand what's going on here. I suspect it's because the action space changed from 2 actions (in CartPole) to 4 (in the custom GridWorld environment), but beyond that I cannot figure it out.

This can be solved by using an Embedding layer as your first layer. In the example below (Embedding(16, 4)), 16 is the grid size (a 4x4 grid has 16 cells) and 4 is the embedding output dimension.
dense_layers = [dense_layer(num_units) for num_units in fc_layer_params]
For example, replacing the above line with the code below will eliminate the error.
dense_layers = [
    # First layer
    tf.keras.layers.Embedding(16, 4),
    # Other layers
    tf.keras.layers.Dense(100, activation=tf.keras.activations.relu)
]
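With that substitution the rest of the original cell stays the same; the new list is still concatenated with the final q_values_layer before being wrapped in the Q-network (a minimal sketch, assuming the tutorial variables above are still in scope):

# Sketch only: rebuild the Q-network with the Embedding-based stack.
q_net = sequential.Sequential(dense_layers + [q_values_layer])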
Source and further explanation:
https://martin-ueding.de/posts/reinforcement-learning-with-frozen-lake/

Related

Shape change error in ctc_batch_cost function with TensorFlow 2.7.0

I have some code that defines a CTC layer, which no longer works in TensorFlow 2.7.0 but works in 2.6.1. The code causing the problem is:
class CTCLayer(layers.Layer):
    def __init__(self, name=None):
        super().__init__(name=name)
        self.loss_fn = keras.backend.ctc_batch_cost

    def call(self, labels, label_length, predictions):  # input_length,
        batch_len = tf.cast(tf.shape(labels)[0], dtype="int64")
        input_length = tf.cast(tf.shape(predictions)[1], dtype="int64")
        label_length = tf.cast(label_length, dtype="int64")  # tf.cast(tf.shape(labels)[1], dtype="int64")

        input_length = input_length * tf.ones(shape=(batch_len, 1), dtype="int64")
        label_length = label_length * tf.ones(shape=(batch_len, 1), dtype="int64")

        loss = self.loss_fn(y_true=labels, y_pred=predictions,
                            input_length=input_length, label_length=label_length)  # , logits_time_major=False)
        self.add_loss(loss)
        return predictions
and crashes when calling the ctc_batch_cost function during model building with the following error:
ValueError: Exception encountered when calling layer "CTC_LOSS" (type CTCLayer).
Traceback:

File "<ipython-input-10-0b2cf7d5ab7d>", line 16, in call  *
    loss = self.loss_fn(y_true=labels, y_pred=predictions, input_length=input_length, label_length=label_length)  #, logits_time_major=False)
File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 6388, in ctc_batch_cost
    ctc_label_dense_to_sparse(y_true, label_length), tf.int32)
File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 6340, in ctc_label_dense_to_sparse
    range_less_than, label_lengths, initializer=init, parallel_iterations=1)

ValueError: Input tensor `CTC_LOSS/Cast_5:0` enters the loop with shape (1, 1), but has shape (1, None) after one iteration. To allow the shape to vary across iterations, use the `shape_invariants` argument of tf.while_loop to specify a less-specific shape.
Call arguments received:
• labels=tf.Tensor(shape=(None, 1), dtype=int32)
• label_length=tf.Tensor(shape=(None, 1), dtype=int32)
• predictions=tf.Tensor(shape=(None, 509, 30), dtype=float32)
I suspect the problem is easy to fix and has something to do with the fact that TensorFlow no longer performs upranking as described in the 2.7.0 release notes:
The methods Model.fit(), Model.predict(), and Model.evaluate() will no longer uprank input data of shape (batch_size,) to become (batch_size, 1). This enables Model subclasses to process scalar data in their train_step()/test_step()/predict_step() methods.
Note that this change may break certain subclassed models. You can revert back to the previous behavior by adding upranking yourself in the train_step()/test_step()/predict_step() methods, e.g. if x.shape.rank == 1: x = tf.expand_dims(x, axis=-1). Functional models as well as Sequential models built with an explicit input shape are not affected.
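For reference, this is my understanding of the upranking workaround the release notes describe, as a generic sketch (a subclassed model restoring the old behavior; it is not specific to my CTC layer, and UprankingModel is just an illustrative name):

class UprankingModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data
        # Re-add the trailing axis that Model.fit() no longer inserts in TF 2.7.
        if x.shape.rank == 1:
            x = tf.expand_dims(x, axis=-1)
        return super().train_step((x, y))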
Any idea will be appreciated. Thanks!

Upgraded to Tensorflow 2.5 now get a Lambda Layer error when using pretrained Keras Applications Models

I followed this tutorial to build a siamese network for my problem.
I was using TensorFlow 2.4.1 and have now upgraded to 2.5. This code worked fine before:
base_cnn = resnet.ResNet50(
    weights="imagenet", input_shape=target_shape + (3,), include_top=False
)

flatten = layers.Flatten()(base_cnn.output)
dense1 = layers.Dense(512, activation="relu")(flatten)
dense1 = layers.BatchNormalization()(dense1)
dense2 = layers.Dense(256, activation="relu")(dense1)
dense2 = layers.BatchNormalization()(dense2)
output = layers.Dense(256)(dense2)

embedding = Model(base_cnn.input, output, name="Embedding")

trainable = False
for layer in base_cnn.layers:
    if layer.name == "conv5_block1_out":
        trainable = True
    layer.trainable = trainable
Now every backbone I try (ResNet, MobileNet, EfficientNet) throws warnings like this:
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.nn.convolution_620), but
are not present in its tracked objects:
<tf.Variable 'stem_conv/kernel:0' shape=(3, 3, 3, 48) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
It compiles and seems to fit.
But do we have to initialize the models somewhat differently in 2.5?
Thanks for any pointers!
I'm not sure what the main reason for your issue is, as it isn't generally reproducible. But here are some notes about that warning message. The warning shown in your question is not from ResNet but from EfficientNet.
Now, we know that the Lambda layer exists so that arbitrary expressions can be used as a Layer when constructing Sequential and Functional API models. Lambda layers are best suited for simple operations or quick experimentation. While it is possible to use Variables with Lambda layers, this practice is discouraged as it can easily lead to bugs. For example:
import tensorflow as tf

x_input = tf.range(12.).numpy().reshape(-1, 4)
weights = tf.Variable(tf.random.normal((4, 2)), name='w')
bias = tf.ones((1, 2), name='b')

# lambda custom layer
mylayer1 = tf.keras.layers.Lambda(lambda x: tf.add(tf.matmul(x, weights), bias),
                                  name='lambda1')
mylayer1(x_input)
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (lambda1), but
are not present in its tracked objects:
<tf.Variable 'w:0' shape=(4, 2) dtype=float32, numpy=
array([[-0.753139 , -1.1668463 ],
[-1.3709341 , 0.8887151 ],
[ 0.3157893 , 0.01245957],
[-1.3878908 , -0.38395467]], dtype=float32)>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
<tf.Tensor: shape=(3, 2), dtype=float32, numpy=
array([[ -3.903028 , 0.7617702],
[-16.687727 , -1.8367348],
[-29.472424 , -4.43524 ]], dtype=float32)>
This is because the mylayer1 layer doesn't track the tf.Variables directly, so those parameters won't appear in mylayer1.trainable_weights.
mylayer1.trainable_weights
[]
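By contrast, a minimal subclassed Layer that wraps the same computation does track its variables. This is only a sketch for illustration (MyDense is a made-up name, not part of the Keras API):

class MyDense(tf.keras.layers.Layer):
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # Variables created via add_weight are tracked automatically.
        self.w = self.add_weight(name='w', shape=(input_shape[-1], self.units),
                                 initializer='random_normal', trainable=True)
        self.b = self.add_weight(name='b', shape=(1, self.units),
                                 initializer='ones', trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b

mylayer2 = MyDense(2, name='subclassed1')
mylayer2(x_input)
mylayer2.trainable_weights   # now lists both 'w' and 'b'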
In general, Lambda layers can be convenient for simple, stateless computation, but anything more complex should use a subclassed Layer instead (as sketched above). From your traceback, it seems such a scenario may be what is happening with the stem_conv layer.
from tensorflow.keras.applications import EfficientNetB0

for layer in EfficientNetB0(weights=None).layers:
    if layer.name == 'stem_conv':
        print(layer)
<tensorflow.python.keras.layers.convolutional.Conv2D object..
A quick survey of the source code of tf.compat.v1.nn.conv2d leads to a lambda expression that might be the cause.
There is no need to revert to TF 2.4.1 here. I would always recommend trying the latest version, because it addresses many performance issues and adds new features. I was able to execute the above code without any issues in TF 2.5:
import tensorflow as tf
print(tf.__version__)

from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, Model

img_width, img_height = 224, 224
target_shape = (img_width, img_height, 3)

base_cnn = ResNet50(
    weights="imagenet", input_shape=target_shape, include_top=False
)

flatten = layers.Flatten()(base_cnn.output)
dense1 = layers.Dense(512, activation="relu")(flatten)
dense1 = layers.BatchNormalization()(dense1)
dense2 = layers.Dense(256, activation="relu")(dense1)
dense2 = layers.BatchNormalization()(dense2)
output = layers.Dense(256)(dense2)

embedding = Model(base_cnn.input, output, name="Embedding")

trainable = False
for layer in base_cnn.layers:
    if layer.name == "conv5_block1_out":
        trainable = True
    layer.trainable = trainable
Output:
2.5.0
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5
94773248/94765736 [==============================] - 1s 0us/step
As per @Olli, restarting the kernel and clearing the session resolved the problem.
pip install tensorflow==2.3.0 worked for me instead of TF 2.5. I was facing the issue related to using a Lambda layer.

What is the correct way of replacing the first convolution layer of a TensorFlow model (for adapting the number of input channels)?

I am trying to replace the first convolution layer of a TensorFlow model in order to adapt it to a different number of input channels. I want to use weights pre-trained on ImageNet to speed up the training of a model that takes 4 input channels rather than ImageNet's 3 RGB channels.
To do that, I want to replace the first convolution layer of the pre-trained model with a convolution layer that goes from 4 channels to the number of channels of the second convolution (64). This seems to me a more natural way to use the ImageNet weights than adding a conv layer in front of the pre-trained model that maps the 4 input channels down to 3 and then using the pre-trained model as is (a rough sketch of that alternative is below).
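For reference, this is roughly what that alternative would look like; it is only a sketch (adapter_model and the 1x1 kernel are my own illustrative choices):

# Sketch of the alternative I'd rather avoid: a 1x1 conv mapping 4 -> 3 channels
# placed in front of the unmodified pre-trained model.
adapter_model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, None, 4)),
    tf.keras.layers.Conv2D(filters=3, kernel_size=1, padding='same'),
    pretrained_model,  # unchanged, still expects 3-channel input
])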
I tried to use the functional API to select just a part of the model, but it returns an error about a discontinuity in the graph between the input layer and the rest of the network. I also tried some other approaches, quite similar to the one posted, and I still get the same error every time. Chopping off the end of a model seems much easier than doing the same with the initial layers, since every layer keeps a reference to the previous one through the graph.
idx_first_layer_to_copy = 2 if isinstance(pretrained_model.layers[0], tf.keras.layers.InputLayer) else 1
first_layer_to_copy = pretrained_model.layers[idx_first_layer_to_copy]

compatible_part_of_pretrained_model = tf.keras.models.Model(
    inputs=first_layer_to_copy.input,
    outputs=pretrained_model.output
)

adapted_model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(None, None, n_input_channels)),
    tf.keras.layers.Conv2D(first_layer_to_copy.filters,
                           first_layer_to_copy.kernel_size,
                           activation='relu',
                           padding='same'),
    compatible_part_of_pretrained_model,
])
and this is the error stack
=================================== FAILURES ===================================
_ TestAdaptModelToInputChannelsIfNecessary.test_input_pass_without_raising_errors_through_model _

self = <tests.test_prepare_model.test_prepare_model.TestAdaptModelToInputChannelsIfNecessary object at 0x7f803a0ebaf0>

    def test_input_pass_without_raising_errors_through_model(self):
        """Test that a previous incompatible tensor can pass through the adapted model without raising erros."""
        # arrange
        n_input_channels = 4
        tensor_incompatible_with_pretrained_model = tf.random.uniform(
            shape=(32, 32, n_input_channels), minval=0, maxval=1, dtype=tf.dtypes.float32, seed=42
        )
        pretrained_model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(None, None, 3)),
            tf.keras.layers.Conv2D(filters=16, kernel_size=(3, 3), activation='relu', padding='same'),
            tf.keras.layers.Conv2D(filters=16, kernel_size=(3, 3), activation='relu', padding='same'),
            tf.keras.layers.Conv2D(filters=1, kernel_size=(1, 1), activation='sigmoid')
        ])
>       adapted_model = adapt_model_to_input_channels_if_necessary(pretrained_model, n_input_channels)

test_prepare_model.py:43:

inputs = [<tf.Tensor 'conv2d/Relu:0' shape=(None, None, None, 16) dtype=float32>]
outputs = [<tf.Tensor 'conv2d_2/Sigmoid:0' shape=(None, None, None, 1) dtype=float32>]
    def _map_graph_network(inputs, outputs):
        """Validates a network's topology and gather its layers and nodes.

        Arguments:
            inputs: List of input tensors.
            outputs: List of outputs tensors.

        Returns:
            A tuple `(nodes, nodes_by_depth, layers, layers_by_depth)`.
            - nodes: list of Node instances.
            - nodes_by_depth: dict mapping ints (depth) to lists of node instances.
            - layers: list of Layer instances.
            - layers_by_depth: dict mapping ints (depth) to lists of layer instances.

        Raises:
            ValueError: In case the network is not valid (e.g. disconnected graph).
        """
        # "depth" is number of layers between output Node and the Node.
        # Nodes are ordered from inputs -> outputs.
        nodes_in_decreasing_depth, layer_indices = _build_map(outputs)
        network_nodes = {
            _make_node_key(node.layer.name, node.layer._inbound_nodes.index(node))
            for node in nodes_in_decreasing_depth
        }

        nodes_depths = {}  # dict {node: depth value}
        layers_depths = {}  # dict {layer: depth value}

        for node in reversed(nodes_in_decreasing_depth):
            # If the depth is not set, the node has no outbound nodes (depth 0).
            depth = nodes_depths.setdefault(node, 0)

            # Update the depth of the corresponding layer
            previous_depth = layers_depths.get(node.layer, 0)
            # If we've seen this layer before at a higher depth,
            # we should use that depth instead of the node depth.
            # This is necessary for shared layers that have inputs at different
            # depth levels in the graph.
            depth = max(depth, previous_depth)
            layers_depths[node.layer] = depth
            nodes_depths[node] = depth

            # Update the depth of inbound nodes.
            # The "depth" of a node is the max of the depths
            # of all nodes it is connected to + 1.
            for node_dep in node.parent_nodes:
                previous_depth = nodes_depths.get(node_dep, 0)
                nodes_depths[node_dep] = max(depth + 1, previous_depth)

        # Handle inputs that are not connected to outputs.
        # We do not error out here because the inputs may be used to compute losses
        # and metrics.
        for input_t in inputs:
            input_layer = input_t._keras_history[0]
            if input_layer not in layers_depths:
                layers_depths[input_layer] = 0
                layer_indices[input_layer] = -1
                nodes_depths[input_layer._inbound_nodes[0]] = 0
                network_nodes.add(_make_node_key(input_layer.name, 0))

        # Build a dict {depth: list of nodes with this depth}
        nodes_by_depth = collections.defaultdict(list)
        for node, depth in nodes_depths.items():
            nodes_by_depth[depth].append(node)

        # Build a dict {depth: list of layers with this depth}
        layers_by_depth = collections.defaultdict(list)
        for layer, depth in layers_depths.items():
            layers_by_depth[depth].append(layer)

        # Get sorted list of layer depths.
        depth_keys = list(layers_by_depth.keys())
        depth_keys.sort(reverse=True)

        # Set self.layers ordered by depth.
        layers = []
        for depth in depth_keys:
            layers_for_depth = layers_by_depth[depth]
            # Network.layers needs to have a deterministic order:
            # here we order them by traversal order.
            layers_for_depth.sort(key=lambda x: layer_indices[x])
            layers.extend(layers_for_depth)

        # Get sorted list of node depths.
        depth_keys = list(nodes_by_depth.keys())
        depth_keys.sort(reverse=True)

        # Check that all tensors required are computable.
        # computable_tensors: all tensors in the graph
        # that can be computed from the inputs provided.
        computable_tensors = set()
        for x in inputs:
            computable_tensors.add(id(x))

        layers_with_complete_input = []  # To provide a better error msg.
        for depth in depth_keys:
            for node in nodes_by_depth[depth]:
                layer = node.layer
                if layer and not node.is_input:
                    for x in nest.flatten(node.keras_inputs):
                        if id(x) not in computable_tensors:
>                           raise ValueError('Graph disconnected: '
                                             'cannot obtain value for tensor ' + str(x) +
                                             ' at layer "' + layer.name + '". '
                                             'The following previous layers '
                                             'were accessed without issue: ' +
                                             str(layers_with_complete_input))
E   ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_1:0", shape=(None, None, None, 3), dtype=float32) at layer "conv2d". The following previous layers were accessed without issue: []

how to initialize a Variable tensor for the weight matrix in a keras model?

I am trying to use tensor Variables as weights in a Keras layer. I know that I can use numpy arrays instead, but the reason I want to feed tensors is that I want my weight matrices to be of type SparseTensor.
This is a small example that I have coded so far:
def model_keras(seed, new_hidden_size_list=None):
    number_of_layers = 1
    hidden_size = 512
    hidden_size_list = [hidden_size] * number_of_layers
    input_size = 784
    output_size = 10
    if new_hidden_size_list is not None:
        hidden_size_list = new_hidden_size_list

    weight_input = tf.Variable(tf.random.normal([784, 512], mean=0.0, stddev=1.0))
    bias_input = tf.Variable(tf.random.normal([512], mean=0.0, stddev=1.0))
    weight_output = tf.Variable(tf.random.normal([512, 10], mean=0.0, stddev=1.0))

    # This gives me an error when trying to use in kernel_initializer and bias_initializer in the keras model
    weight_initializer_input = tf.initializers.variables([weight_input])
    bias_initializer_input = tf.initializers.variables([bias_input])
    weight_initializer_output = tf.initializers.variables([weight_output])

    # This works fine
    #weight_initializer_input = tf.initializers.lecun_uniform(seed=None)
    #bias_initializer_input = tf.initializers.lecun_uniform(seed=None)
    #weight_initializer_output = tf.initializers.lecun_uniform(seed=None)

    print(weight_initializer_input, bias_initializer_input, weight_initializer_output)

    model = keras.models.Sequential()
    for index in range(number_of_layers):
        if index == 0:
            # input layer
            model.add(keras.layers.Dense(hidden_size_list[index], activation=nn.selu, use_bias=True,
                                         kernel_initializer=weight_initializer_input,
                                         bias_initializer=bias_initializer_input,
                                         input_shape=(input_size,)))
        else:
            model.add(keras.layers.Dense(hidden_size_list[index], activation=nn.selu, use_bias=True,
                                         kernel_initializer=weight_initializer_hidden,
                                         bias_initializer=bias_initializer_hidden))
    # output layer
    model.add(keras.layers.Dense(output_size, use_bias=False, kernel_initializer=weight_initializer_output))
    model.add(keras.layers.Activation(nn.softmax))
    return model
I am using TensorFlow 1.15.
Any idea how one can use custom (user-defined) tensor Variables as initializers instead of the pre-set schemes (e.g. Glorot, truncated normal, etc.)? Another approach I could take is to define the computations explicitly instead of using keras.Layer.
Many thanks!
Your code works after enabling eager execution.
import tensorflow as tf
tf.compat.v1.enable_eager_execution()
Add this at the top of your file.
See this for working code.

models.set_weights() gives None

I am trying to build an ensemble DNN model. I train e.g. 5 models, take the weights, and average them. After that I want to clone the first model and assign it the new weights. But it does not work.
The Model is built like this:
def build_DNN_model(self):
    # initialize the DNN
    ann = tf.keras.models.Sequential()
    # add first hidden layer
    num_neurons = self.num_neurons
    ann.add(tf.keras.layers.Dense(units=num_neurons, activation='relu',
                                  kernel_initializer=tf.constant_initializer(1.)))
    ann.add(tf.keras.layers.Dropout(0.5))
    # add second hidden layer
    ann.add(tf.keras.layers.Dense(units=num_neurons, activation='relu'))
    ann.add(tf.keras.layers.Dropout(0.5))
    # add output layer
    ann.add(tf.keras.layers.Dense(units=1))
    # compile
    ann.compile(optimizer='adam', loss='mean_squared_error')
    return ann
Then the model is fitted to the data; in fact I build 5 models and fit all of them to the same data.
After that I create a list of Keras model objects, called "members".
Now I would like to assign my new weights to a clone of one of the models. But even if I do:
members[0].set_weights(members[0].get_weights())
it returns None.
I use TensorFlow 2. I would appreciate your help very much.
You should define the input shape in the first layer of your model.
After doing this, I simply created 2 models like yours (m1, m2) and assigned m2 the same weights as m1... they are the same:
def build_DNN_model(input_dim):
    # initialize the DNN
    ann = tf.keras.models.Sequential()
    # add first hidden layer
    num_neurons = 32
    ann.add(tf.keras.layers.Dense(units=num_neurons, activation='relu',
                                  kernel_initializer=tf.constant_initializer(1.),
                                  input_dim=input_dim))
    ann.add(tf.keras.layers.Dropout(0.5))
    # add second hidden layer
    ann.add(tf.keras.layers.Dense(units=num_neurons, activation='relu'))
    ann.add(tf.keras.layers.Dropout(0.5))
    # add output layer
    ann.add(tf.keras.layers.Dense(units=1))
    # compile
    ann.compile(optimizer='adam', loss='mean_squared_error')
    return ann

m1 = build_DNN_model((100))
m2 = build_DNN_model((100))
m2.set_weights(m1.get_weights())

# check the weights
[(w1 == w2).all() for w1, w2 in zip(m1.get_weights(), m2.get_weights())]
# [True, True, True, True, True, True]
the notebook
EDIT1: assign random weights to m1:
m1.set_weights([np.random.uniform(0,1, i.shape) for i in m1.get_weights()])
EDIT2: here you can find a working implementation of model_weight_ensemble in your context, from https://machinelearningmastery.com/polyak-neural-network-model-weight-ensemble/
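For completeness, here is a hedged sketch of that weight-averaging step (my own code, not copied from the linked post; it assumes all members share the same, already-built architecture):

import numpy as np
import tensorflow as tf

def average_model_weights(members):
    # Element-wise average of the weights across all ensemble members.
    per_model = [m.get_weights() for m in members]
    return [np.mean(np.stack(layer_weights, axis=0), axis=0)
            for layer_weights in zip(*per_model)]

# Illustrative usage: clone one member and load the averaged weights into it.
# ensemble = tf.keras.models.clone_model(members[0])
# ensemble.set_weights(average_model_weights(members))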
Creating a simple model:
def create_model1():
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(13,)))
    model.add(tf.keras.layers.Dense(units=6, activation='relu', name='d1'))
    model.add(tf.keras.layers.Dense(units=2, activation='softmax', name='d2'))
    return model
Model Architecture:
Looking at layers:
model.layers
Output:
[<tensorflow.python.keras.layers.core.Dense at 0x2193acc95c8>,
<tensorflow.python.keras.layers.core.Dense at 0x2193ad3ad08>]
Looking at the weights of 2nd dense layer:
model.layers[1].weights
Output:
[<tf.Variable 'd2/kernel:0' shape=(6, 2) dtype=float32, numpy=
array([[ 0.11061734, 0.61788374],
[ 0.31208295, 0.19295567],
[-0.6812483 , 0.05383837],
[ 0.39284903, 0.69312006],
[-0.519426 , 0.67820543],
[-0.7337165 , 0.11025453]], dtype=float32)>,
<tf.Variable 'd2/bias:0' shape=(2,) dtype=float32, numpy=array([0., 0.], dtype=float32)>]
Setting weights:
new_weights = [tf.random.uniform(shape = (6,2)), tf.random.uniform(shape = (2,))]
model.layers[1].set_weights(new_weights)
When setting weights, the shapes in new_weights must match the shapes of that particular layer's weights.
Here, new_weights is a list containing two values: the first element is the kernel weights and the second element is the bias weights.
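A quick sanity check of the assignment (a small sketch; it just compares the layer's variables against what was passed in):

import numpy as np

# Each assigned tensor should now equal the corresponding layer variable.
for assigned, current in zip(new_weights, model.layers[1].weights):
    assert np.allclose(assigned, current.numpy())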