Let's say I want to predict the winner of a tag-team race, where some drivers usually place higher in certain weather conditions:
Race   | Driver | Weather | Time
-------|--------|---------|------
Dummy1 | D1     | Rain    | 2:00
Dummy1 | D2     | Rain    | 5:00
Dummy1 | D3     | Rain    | 4:50
Dummy2 | D1     | Sunny   | 3:00
Dummy2 | D2     | Sunny   | 2:50
Dummy2 | D3     | Sunny   | 2:30
...
The logic is that a team composed of D1 and D3 would outperform any other combination in Rain, but wouldn't have the same luck in other weather conditions. With that said, I thought about the following model:
Layer 1          | Layer 2                | Layer 3 (output)
Driver encoding  | Weather encoding       | Expected race time
-----------------|------------------------|------------------------
Input of 0 or 1  | sum(Layer 1 * weights  | sum(Layer 2 * weights)
                 |     * input of 0 or 1) |
This means that layer 2 uses layer 1 as well as input values to compute a value.
The reason I want this architecture, instead of having every feature on layer 1, is that I want different features to be multiplied together rather than summed.
I could not find anything like this, but it is probably just me not knowing the name of this approach. Can someone point me to sources, or explain how to replicate this in TensorFlow/PyTorch/any other lib?
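For what it's worth, the multiplication being described can be sketched in a couple of lines of numpy (hypothetical names and sizes, just to make the "multiply instead of sum" idea concrete):

import numpy as np

driver_onehot = np.array([1, 0, 1])   # hypothetical: drivers D1 and D3 selected
weather_onehot = np.array([1, 0, 0])  # hypothetical: Rain
W1 = np.random.rand(3, 3)             # layer-1 weights (would be learned)

# Layer 2 multiplies the weighted driver encoding elementwise by the weather
# input, instead of summing weather in as just another feature
layer2 = (driver_onehot @ W1) * weather_onehot
print(layer2)  # only the active weather component survives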
Turns out it was actually pretty simple. For anyone who might stumble upon this post and would like to test this approach, here's rough code:
Racing dataset
#        TEAM 1                 TEAM 2          "Weather"   "WON"
# "A","B","C","D","E",   "A","B","C","D","E",   W1 W2 W3    (1 if combined times of team 1 < combined times of team 2)
dataset=[[ 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1],
[ 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1],
[ 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1],
[ 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1],
[ 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0],
[ 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0],
[ 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0],
[ 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0],
[ 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0],
[ 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1],
]
inputs = [[x[0:-4], x[-4:-1]] for x in dataset]  # per sample: [10 team flags, 3 weather flags]
results = [[x[-1]] for x in dataset]             # label: 1 if team 1 won
Typings to make code more readable
from typing import Iterator
import numpy as np  # needed below for InputLayerValue

class InputLayer():
    def __init__(self, inputs, useBias=False):
        self.inputs = inputs
        self.useBias = useBias

    def __str__(self):
        return "Layer of size " + str(self.inputs)

    def __repr__(self) -> str:
        return self.__str__()

class InputLayerValue():
    def __init__(self, values):
        self.values = np.array(values)
Actual model
import torch
from torch import nn

class MutipleInputModel(nn.Module):
    def __init__(self, input_layers: Iterator[InputLayer], output_size):
        super(MutipleInputModel, self).__init__()
        print(input_layers)
        self.nns = []
        for i in range(len(input_layers) - 1):
            current: InputLayer = input_layers[i]
            next: InputLayer = input_layers[i + 1]
            il = nn.Linear(current.inputs, next.inputs, current.useBias)
            # To have hidden layers, you need to either use another model or create
            # and attach multiple Linear models - nn.Linear(next.inputs, next.inputs)
            name = "nn" + str(i)
            # Models must be directly under self to be found by model.parameters()
            self.__setattr__(name, il)
            self.nns.append(name)
        # Output layer (uses the last layer's bias setting; the original used the
        # loop variable `current`, which points at the second-to-last layer)
        il = nn.Linear(input_layers[-1].inputs, output_size, input_layers[-1].useBias)
        name = "nnOutput"
        self.__setattr__(name, il)
        self.nns.append(name)

    def forward(self, inputs: Iterator[InputLayerValue]):
        inputsLen = len(inputs[0])
        if inputsLen != len(self.nns):
            raise Exception("Number of input values provided and input layers must be equal. Provided "
                            + str(inputsLen) + " sets of inputs for a " + str(len(self.nns))
                            + "-input-layer network")
        # Initialize the first layer of inputs with ones, which will then be
        # multiplied by the actual input values
        lastOutput = torch.ones(len(inputs), len(inputs[0][0].values))  # Layer 1 outputs | Layer 2 provided inputs | Layer 2 actual inputs
        for i in range(inputsLen):                                      # lastOutput      | multiplier              | input
            multiplier = torch.from_numpy(np.array([x[i].values for x in inputs])).float()  # 0.2 | 0 | 0
            input = lastOutput * multiplier                                                 # 1.5 | 1 | 1.5
            lastOutput = self.__getattr__(self.nns[i])(input)                               # 1.0 | 5 | 5
        return lastOutput
Training
# Define hyperparameters
model = MutipleInputModel(input_layers=[InputLayer(len(x)) for x in inputs[0]], output_size=1)
n_epochs = 1000
lr = 0.001
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)

for epoch in range(1, n_epochs + 1):
    optimizer.zero_grad()  # Clears existing gradients from the previous epoch
    output = model([[InputLayerValue(y) for y in x] for x in inputs])
    loss = criterion(output, torch.from_numpy(np.array(results)).float())
    loss.backward()
    optimizer.step()
    print('Epoch: {}/{}.............'.format(epoch, n_epochs), end=' ')
    print("Loss: {:.4f}".format(loss.item()))
Testing:
def predict(model, input):
    input = [[InputLayerValue(y) for y in input]]
    out = model(input)
    return nn.Sigmoid()(out[0][0]).item()

print(predict(model, [[1, 1, 0, 0, 0, 0, 0, 1, 1, 0], [1, 0, 0]]))
print(predict(model, [[1, 1, 0, 0, 0, 0, 0, 1, 1, 0], [0, 1, 0]]))
print(predict(model, [[1, 1, 0, 0, 0, 0, 0, 1, 1, 0], [0, 0, 1]]))
This is a really basic implementation, but it could easily be modified to have hidden layers.
It clearly needs further testing to see whether it is actually better than a traditional NN, but I would say it is great for NN explainability.
Hi there, I have an np array with zeros and ones. I would like to check every 4 values, and if there is at least one 1 among them, set all four values to 1. Otherwise, leave all of them at zero.
Do you know how to do it? Thanks, here is a sample:
array           = [ 0 0 0 0 1 1 1 1 0 0 1 0 0 0 0 0 ]
array_corrected = [ 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 ]
Many thanks, hope the question is now clear!
Probably not the shortest solution, but definitely working and fast:
1. reshape
2. create a mask
3. apply the mask and get the result
import numpy as np
array = np.array([0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0])
array
groups = array.reshape(-1, 4)  # group every 4 elements into a row
groups
mask = groups.sum(axis=1) > 0  # identify groups with at least one '1'
mask
np.logical_or(groups.T, mask).T.astype(int).flatten()
# transpose groups so the mask broadcasts across each group, apply it,
# transpose back, replace True/False with 1/0 and restore the original shape
Returns (in Jupyter notebook):
array([0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0])
array([[0, 0, 0, 0],
[1, 1, 1, 1],
[0, 0, 1, 0],
[0, 0, 0, 0]])
array([False, True, True, False])
array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
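A shorter variant of the same idea, in case it is useful (a sketch assuming, as above, that the array length is a multiple of 4): reduce each group of 4 with any and repeat the per-group flag back out to the original length:

import numpy as np

array = np.array([0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0])
# any() per group of 4, then repeat each group's flag 4 times to restore the shape
result = np.repeat(array.reshape(-1, 4).any(axis=1), 4).astype(int)
print(result)  # [0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0]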
I have target data y_tr with shape (92107, 49), where each sample (row) is an array of length 49. I would like to loop over y_tr and one-hot encode each array. I have been using keras to_categorical, though I am facing an IndexError.
Array in y_tr:
array([ 379, 379, 379, 252, 4166, 391, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0], dtype=int32)
The number of classes to encode each integer index is 10,000 (I do not wish to encode zero values):
onehots = []
for seq in y_tr[:, 1:]:
    try:
        onehots.append(to_categorical(seq - 1, num_classes=tar_vocab_size, dtype=np.int32))
    except IndexError as e:
        raise e
76 n = y.shape[0]
77 categorical = np.zeros((n, num_classes), dtype=dtype)
---> 78 categorical[np.arange(n), y] = 1
79 output_shape = input_shape + (num_classes,)
80 categorical = np.reshape(categorical, output_shape)
IndexError: index 28350 is out of bounds for axis 1 with size 10000
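The traceback narrows this down: to_categorical builds an array with num_classes columns and then indexes column y, so every value in seq - 1 must lie in [0, num_classes). Here at least one index (28350) is >= 10,000, so either tar_vocab_size is too small for the data, or the data needs clipping/filtering first. A minimal sketch of a pre-check (hypothetical values, just to illustrate the constraint):

import numpy as np

tar_vocab_size = 10000
seq = np.array([379, 28351, 1, 0], dtype=np.int32)  # hypothetical row with an out-of-range index

labels = seq - 1
# to_categorical requires 0 <= label < num_classes; note that seq - 1 also turns
# the padding zeros into -1, which numpy fancy indexing wraps to the last class
print(labels[(labels < 0) | (labels >= tar_vocab_size)])  # offending values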
Let's say I have an array that looks like this:
a = np.array([0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0])
I want to fill the values that are between 1's with 1's.
So this would be the desired output:
a = np.array([0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0])
I have taken a look into this answer, which yields the following:
array([0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1])
I am sure this answer is really close to the output I want. However, although I have tried countless times, I can't change this code to make it work the way I want, as I am not that proficient with numpy arrays.
Any help is much appreciated!
Try this
b = ((a == 1).cumsum() % 2) | a
Out[10]:
array([0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0], dtype=int32)
From @Paul Panzer: use ufunc.accumulate with bitwise_xor:
b = np.bitwise_xor.accumulate(a)|a
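Both one-liners rely on the same parity trick: the running count (or running xor) of 1s seen so far is odd from each opening 1 up to its closing 1, and the trailing | a turns the closing 1s themselves back on. A quick check of that reading, using the a from the question:

import numpy as np

a = np.array([0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0])
parity = np.bitwise_xor.accumulate(a)  # 1 from an opening 1 up to (but not at) its closing 1
print(parity | a)                      # OR with a restores the closing 1s themselves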
Try this:

import numpy as np

num_lst = np.array(
    [0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0])

i = 0
while i < len(num_lst) - 1:     # Iterate through the list (stop before the last index so i+1 stays in bounds)
    if num_lst[i]:              # Check if the element at position i is 1
        if not num_lst[i + 1]:  # Check if the next element is 0
            num_lst[i + 1] = 1  # Change the next element to 1
            i += 1              # Continue through the loop
        else:                   # The next element is 1
            i += 2              # Skip the next element
    else:
        i += 1                  # Continue through the loop

print(num_lst)
This is probably not the most elegant way to execute this, but it should work. Basically, we loop through the list to find any 1s. When we find an element that is 1, we check if the next element is 0. If it is, then we change the next element to 1. If the next element is 1, that means we should stop changing 0s to 1s, so we jump over that element and proceed with the iteration.
I am trying to figure out how to do this transformation symbolically in Theano, for a matrix of undetermined size.
From:
[[0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1],
.
.
]
To:
[[1, 2, 3, 0, 1, 2, 3, 4, 5, 0, 0, 1, 0, 1, 2, 3, 0],
[1, 2, 3, 4, 5, 6, 7, 8, 0, 1, 2, 0, 0, 0, 0, 0, 0],
.
.
]
So for every run of consecutive 0s I want an increasing count, and whenever I stumble on a 1 the count resets to 0.
Here's one way to do it, using inefficient scans:

import theano
import theano.tensor as tt

def inner_step(x_t_t, y_t_tm1):
    # Reset the count to 0 on a 1; otherwise increment the previous count
    return tt.switch(x_t_t, 0, y_t_tm1 + 1)

def outer_step(x_t):
    # Scan along one row, threading the running count through outputs_info
    return theano.scan(inner_step, sequences=[x_t], outputs_info=[0])[0]

def compile():
    x = tt.bmatrix()
    y = theano.scan(outer_step, sequences=[x])[0]
    return theano.function([x], y)

def main():
    f = compile()
    data = [[0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1]]
    print(f(data))

main()
When run, this prints:
[[1 2 3 0 1 2 3 4 5 0 0 1 0 1 2 3 0]
[1 2 3 4 5 6 7 8 0 1 2 0 0 0 0 0 0]]
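For reference, the recurrence the scan implements is just y_t = 0 if x_t else y_(t-1) + 1, applied independently per row. A plain-Python sketch of the same recurrence (not Theano, just to make the logic concrete):

def reset_count(row):
    out, count = [], 0
    for v in row:
        count = 0 if v else count + 1  # a 1 resets the counter, a 0 extends the run
        out.append(count)
    return out

print(reset_count([0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1]))
# [1, 2, 3, 0, 1, 2, 3, 4, 5, 0, 0, 1, 0, 1, 2, 3, 0]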