Sparse Neural Networks with TensorFlow

I have taken a few deeplearning.ai courses, all of which focus on fully-connected topologies, where every node in one layer connects to every node in the next, and neglect sparsely connected ones.
I am wondering whether I can use TensorFlow to compute both the forward and backward propagation of a sparse neural network, like the following:
I'm assuming the usual dense matrix operations won't work directly for this kind of topology, since not every pair of nodes in adjacent layers is connected.
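One common workaround is to keep an ordinary dense weight matrix and multiply it by a fixed binary mask, so only the connections that exist in your topology contribute to the forward pass, and backprop gives zero gradient for the masked-out weights. A minimal sketch assuming TF 2.x; the mask pattern here is invented for illustration:

```python
import tensorflow as tf

n_in, n_out = 4, 3
# 1 = connection exists, 0 = no connection; this pattern is made up.
mask = tf.constant([[1., 0., 1.],
                    [0., 1., 0.],
                    [1., 1., 0.],
                    [0., 0., 1.]])
w = tf.Variable(tf.random.normal((n_in, n_out)))
b = tf.Variable(tf.zeros(n_out))

def sparse_layer(x):
    # masked matmul: missing connections contribute nothing to the output
    return tf.nn.relu(x @ (w * mask) + b)

x = tf.random.normal((2, n_in))
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(sparse_layer(x))
# gradients are automatically zero wherever the mask is zero,
# so masked-out weights never get updated
grads = tape.gradient(loss, [w, b])
```

For truly large, very sparse graphs you would eventually want tf.sparse or gather/scatter ops instead, but the masking trick keeps everything as plain matrix math.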

Related

What does it mean when the weights of a layer are not normally distributed

I plotted all the weights of my neural network on TensorBoard and found that the weights of some layers are normally distributed:
but some are not.
What does this imply? Should I increase or decrease the capacity of those layers?
Update:
My network is LSTM-based. The non-normally distributed weights are the ones multiplied with the input features; the normally distributed weights are the ones multiplied with the states.
One explanation, based on convolutional networks (I don't know whether it holds for other kinds of models), is that the first layers try to find distinct small features, so their weights are distributed very widely while the network hunts for any useful feature it can. The next layers then combine these distinct features, and a normal distribution of weights makes sense there, since each of the previous features becomes part of a single bigger, more representative feature.
But this is only my intuition; I'm not sure this is the real reason, and I have no proof.
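For reference, here is a minimal sketch of how such per-layer weight histograms can be logged, assuming TF 2.x and a stand-in Keras LSTM model (the log directory and the three-step loop are placeholders for a real training loop):

```python
import tensorflow as tf

# stand-in for the LSTM-based network from the question
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(8, input_shape=(5, 3)),
    tf.keras.layers.Dense(1),
])

writer = tf.summary.create_file_writer("logs/weights")  # arbitrary log dir
with writer.as_default():
    for step in range(3):  # stands in for training steps
        for var in model.trainable_variables:
            # for an LSTM, 'kernel' holds the input weights and
            # 'recurrent_kernel' the state (hidden-to-hidden) weights
            tf.summary.histogram(var.name.replace(":", "_"), var, step=step)
```

The kernel/recurrent_kernel split matches the update above: the input weights and the state weights are stored as separate tensors, so they show up as separate histograms.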

Irregular neural networks with TensorFlow?

I'm building NEAT (NeuroEvolution of Augmenting Topologies) and I'm looking for a way to optimize my algorithm. The network is an irregular set of connections between neurons.
I'm not very familiar with TensorFlow, but I suppose there is a way to use it here.
I need to iterate through the network many times over a long stretch of time, so evaluation gets very slow when the net is big.
The network can have any structure: a genetic algorithm evolves it, and every neuron can have a different activation function.
Any suggestions?
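A minimal sketch of one way to express such a graph with TensorFlow ops: evaluate neurons in topological order, each with its own incoming connections and activation function. The genome format, topology, and weights below are all made up for illustration; for large populations you would want to batch or compile this (e.g. with tf.function), but the structure is the same:

```python
import tensorflow as tf

# genome: neuron id -> (incoming connections, activation function);
# neurons 0 and 1 are the inputs. Topology and weights are invented.
genome = {
    2: ([(0, 0.5), (1, -1.2)], tf.nn.relu),
    3: ([(0, 0.8), (2, 2.0)], tf.tanh),
    4: ([(2, 1.5), (3, -0.7)], tf.sigmoid),  # output neuron
}

def evaluate(inputs):
    # inputs: shape (batch, 2), one column per input neuron
    values = {0: inputs[:, 0], 1: inputs[:, 1]}
    for nid, (conns, act) in genome.items():  # dict order is topological here
        values[nid] = act(tf.add_n([values[src] * w for src, w in conns]))
    return values[4]

print(evaluate(tf.constant([[1.0, 2.0]])))
```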

Is TensorFlow only limited to neural networks?

Is TensorFlow designed only for implementing neural networks? Can it be used as a general machine learning library, for implementing all sorts of supervised as well as unsupervised techniques (naive Bayes, decision trees, k-means, SVMs, to name a few)? Whatever TensorFlow literature I come across generally talks about neural networks. The graph-based architecture of TensorFlow probably makes it a suitable candidate for neural nets. But can it also be used as a general machine learning framework?
TensorFlow does include additional machine learning algorithms, such as:
K-means clustering
Random Forests
Support Vector Machines
Gaussian Mixture Model clustering
Linear/logistic regression
The above list is taken from here, so you can read this link for more details.
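As a concrete illustration that TensorFlow works as a general numerical library, here is a hedged sketch of Lloyd's k-means algorithm written in plain TF 2.x ops, with random toy data and k = 3; nothing in it is neural-network specific:

```python
import tensorflow as tf

def assign(points, centroids):
    # squared distance from every point to every centroid: shape (n, k)
    d2 = tf.reduce_sum(tf.square(points[:, None, :] - centroids[None, :, :]), axis=-1)
    return tf.argmin(d2, axis=1)

def update(points, assignments, k):
    # each centroid becomes the mean of its assigned points
    # (a cluster that loses all its points would yield NaN in this toy sketch)
    return tf.stack([
        tf.reduce_mean(tf.boolean_mask(points, tf.equal(assignments, c)), axis=0)
        for c in range(k)
    ])

points = tf.random.normal((500, 2))
centroids = tf.random.shuffle(points)[:3]  # k = 3, random init
for _ in range(10):                        # fixed number of Lloyd iterations
    centroids = update(points, assign(points, centroids), 3)
```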

combine layers from different neural networks

I am using TensorFlow to train two instances of the same neural network on two different datasets. The network itself is quite simple: an input layer, an output layer, and 6 hidden layers (each hidden layer is 20 neurons followed by a non-linear activation function).
I can train the network on the two datasets, and that works fine. What I want to do now is create a new network that combines the two trained networks. In particular, I want the input and the first 3 hidden layers to come from one trained network, and the last 3 hidden layers and the output layer from the other. I am very new to TensorFlow and have not found a way to do this. Can someone point me to an API or some other way to build this sort of hybrid network?
Constructing your network with Keras will make this easy; see the Keras documentation for how to reuse layers across models.
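A sketch of what that can look like with the Keras functional API, assuming both networks are Sequential stacks of Dense layers as described in the question (make_net, the input size of 10, and ReLU are my assumptions, not from the question):

```python
import tensorflow as tf

def make_net(input_dim=10):
    # the simple network from the question: 6 hidden layers of 20 units
    layers = [tf.keras.layers.Dense(20, activation="relu",
                                    input_shape=(input_dim,))]
    layers += [tf.keras.layers.Dense(20, activation="relu") for _ in range(5)]
    layers += [tf.keras.layers.Dense(1)]  # output layer
    return tf.keras.Sequential(layers)

model_a = make_net()  # imagine this trained on dataset A
model_b = make_net()  # imagine this trained on dataset B

# hybrid: input + first 3 hidden layers from A,
# last 3 hidden layers + output layer from B
inputs = tf.keras.Input(shape=(10,))
x = inputs
for layer in model_a.layers[:3]:
    x = layer(x)
for layer in model_b.layers[3:]:
    x = layer(x)
hybrid = tf.keras.Model(inputs, x)
```

Because the hybrid model calls the original layer objects, it shares their trained weights rather than copying them.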
You might also be describing a multi-task learning setup. One simplification: keep the weight matrices of the variables trained on the different datasets separate, then sum (or average) the corresponding weight layers of the two trained networks a and b into a shared weight variable, and finally evaluate the summed network as a single multi-task model.
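If weight sharing or averaging is really what's wanted, a crude version, continuing from the sketch above (this is my reading of the answer, not an established recipe):

```python
# average corresponding weight tensors of the two trained networks
# into a single shared network of the same architecture
shared = make_net()
for sv, av, bv in zip(shared.weights, model_a.weights, model_b.weights):
    sv.assign((av + bv) / 2.0)
```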

How does a single TensorFlow deep neural network scale in performance with multiple GPUs?

I have read that convolutional networks scale very well across multiple GPUs, but what about deep neural networks that don't use convolutions? The TensorFlow website provides a multi-GPU example, but it uses convolutions.
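Dense-only models can still be trained data-parallel across GPUs. A minimal sketch using tf.distribute.MirroredStrategy (the layer sizes here are arbitrary), which replicates the model on every visible GPU and averages the gradients between replicas:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # one replica per visible GPU
with strategy.scope():
    # a purely fully-connected network, no convolutions
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(1024, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(1024, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# model.fit(x, y, batch_size=256) now splits each batch across the replicas.
```

One caveat worth knowing: dense layers carry many more weights per unit of compute than conv layers, so gradient synchronization tends to dominate sooner, and fully-connected networks usually scale less well across GPUs than convolutional ones.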