I want to apply one-hot encoding to my categorical features. I see how one can use tf.one_hot to do that, but one_hot accepts indices, so I'd need to map the tokens to indices first. However, all of the examples I've found compute the vocabulary over the entire dataset. I don't want to do that, as I have a hard-coded dict of possible values. Something like:
CATEG = {
'feature1': ['a', 'b', 'c'],
'feature2': ['foo', 'bar']
}
I just need the preprocessing_fn to map the tokens to an index and then run them through tf.one_hot. How can I do that?
For example, tft.apply_vocabulary sounds like what I need, but then I see that it takes a deferred_vocab_filename_tensor of type common_types.TemporaryAnalyzerOutputType. The description says:
The deferred vocab filename tensor as returned by tft.vocabulary, as long as the frequencies were not stored.
And I see that tft.vocabulary is again computing the vocab:
Computes the unique values taken by x, which can be a Tensor or CompositeTensor of any size. The unique values will be aggregated over all dimensions of x and all instances.
Why doesn't something simple like this exist?
The simplest option is probably a broadcast comparison via tf.equal (the == operator on tensors), as follows:
import tensorflow as tf
CATEG = {
'feature1': ['a', 'b', 'c'],
'feature2': ['foo', 'bar']
}
tokens = tf.constant(CATEG['feature2'])              # shape (2,)
inputs = tf.constant(["foo", "foo", "bar", "none"])  # shape (4,)
# compare every token against every input: (2, 1) == (1, 4) broadcasts to (2, 4)
onehot = tf.cast(tf.expand_dims(tokens, 1) == tf.expand_dims(inputs, 0), dtype=tf.float32)
print(onehot)
# [[1., 1., 0., 0.],
#  [0., 0., 1., 0.]]
# unknown tokens such as "none" produce an all-zero column
Add batch dims if needed.
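If you'd rather keep the map-to-index-then-tf.one_hot flow from the question, a tf.lookup.StaticHashTable built from the hard-coded vocab gives you exactly that without analyzing the dataset. A minimal sketch (the helper name one_hot_from_vocab is mine, not a TFT API):

import tensorflow as tf

CATEG = {
    'feature1': ['a', 'b', 'c'],
    'feature2': ['foo', 'bar']
}

def one_hot_from_vocab(values, vocab):
    # map each token to its index in the hard-coded vocab; unknown tokens -> -1
    table = tf.lookup.StaticHashTable(
        tf.lookup.KeyValueTensorInitializer(
            keys=tf.constant(vocab),
            values=tf.range(len(vocab), dtype=tf.int64)),
        default_value=-1)
    # tf.one_hot turns index -1 into an all-zero row, so unknowns are dropped
    return tf.one_hot(table.lookup(values), depth=len(vocab))

inputs = tf.constant(["foo", "foo", "bar", "none"])
print(one_hot_from_vocab(inputs, CATEG['feature2']))
# [[1., 0.], [1., 0.], [0., 1.], [0., 0.]]

Note this yields shape (num_inputs, vocab_size), i.e. the transpose of the tf.equal output above.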
Related
What is the easiest way (I am looking for the minimum number of lines of code) to convert a pandas DataFrame of 4 columns into a 3d tensor, padding the missing values along the way?
import pandas as pd
# initialize data of lists.
data = {'Animal':['Cat', 'Dog', 'Dog', 'Dog'],
'Country':["USA", "Canada", "USA", "Canada"],
'Likes': ['Petting', 'Hunting', 'Petting', 'Petting'],
'Age':[1, 2, 3, 4]}
# there are no duplicate lines in terms of Animal, Country and Likes, so I do not need any aggregation function
# Create DataFrame
dfAnimals = pd.DataFrame(data)
dfAnimals
I want to create a 3d tensor with shape (2, 2, 2) --> (Animal, Country, Likes), with Age as the value. I also want to fill the missing values with 0.
There might be a solution with fewer lines and more optimized library calls, but this seems to do the trick:
import pandas as pd
import numpy as np
import torch
data = ...
df = pd.DataFrame(data)
CAT = df.columns.tolist()
CAT.remove("Age")
# encode categories as integers and extract the shape
shape = []
for c in CAT:
    shape.append(len(df[c].unique()))
    df[c] = df[c].astype("category").cat.codes
shape = tuple(shape)
# get indices as tuples and corresponding values
idx = [tuple(t) for t in df.values[:,:-1]]
values = df.values[:,-1]
# init final matrix with zeros and fill it from indices
A = np.zeros(shape)
for i, v in zip(idx, values):
    A[i] = v
# convert to pytorch tensor
A = torch.tensor(A)
print(A)
tensor([[[0., 0.],
         [0., 1.]],

        [[2., 4.],
         [0., 3.]]], dtype=torch.float64)
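If you want to drop the explicit fill loop, numpy advanced indexing can scatter all the Age values in one shot. A sketch under the same assumptions as above (categories encoded with cat.codes, zeros as fill):

import numpy as np
import pandas as pd
import torch

data = ...  # the dict from the question
df = pd.DataFrame(data)
CAT = ['Animal', 'Country', 'Likes']
# one row of integer codes per categorical column, shape (3, n_rows)
codes = np.stack([df[c].astype("category").cat.codes for c in CAT])

A = np.zeros([c.max() + 1 for c in codes])
A[tuple(codes)] = df['Age']  # scatter every Age at its (Animal, Country, Likes) index
A = torch.tensor(A)
print(A)  # same tensor as above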
Consider the following code:
import numpy as np
import pandas as pd

dog = np.random.rand(10, 10)
frog = pd.DataFrame(dog, columns=['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'])
from sklearn.preprocessing import StandardScaler
slog = StandardScaler()
mog = slog.fit_transform(frog.values)
frog[frog.columns] = mog
OK, now we should have a dataframe whose values should be the standard-scaled array. But:
frog.describe()
gives:
(screenshot of the frog.describe() output)
Note that the standard deviation is 1.05
While
np.std(mog, axis=0)
Gives the expected:
array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
What gives?
The standard deviation computed by the describe method uses the sample standard deviation, while StandardScaler uses the population standard deviation. The only difference between the two is whether the sum of the squared differences from the mean is divided by n-1 (for the sample st. dev.) or n (for the pop. std. dev.).
numpy.std computes the population st. dev. by default, but you can use it to compute the sample st. dev. by adding the argument ddof=1, and the result agrees with the values computed by describe:
In [54]: np.std(mog, axis=0, ddof=1)
Out[54]:
array([1.05409255, 1.05409255, 1.05409255, 1.05409255, 1.05409255,
1.05409255, 1.05409255, 1.05409255, 1.05409255, 1.05409255])
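The ratio between the two estimators is sqrt(n/(n-1)); with the n = 10 rows here that is sqrt(10/9) ≈ 1.0541, exactly the value describe reports:

np.sqrt(10 / 9)  # 1.0540925533894598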
Suppose I have a numpy ndarray of shape (2,4) as follows
>>> import numpy
>>> array1 = numpy.random.rand(2,4)
>>> array1
array([[ 0.87791012,  0.84566058,  0.73877908,  0.40377929],
       [ 0.9669688 ,  0.15913901,  0.70374509,  0.95776427]])
I have second array of shape (2,) as follows
>>> array2 = numpy.random.rand(2)
>>> array2
array([ 0.57126204,  0.67938752])
I would like to compare both the arrays along the column dimension to find the elements in array1 that are greater than array2 (elementwise). The desired result is
array([[ 1., 1., 1., 0.],
[ 1., 0., 1., 1.]])
If both have the same dimensions, I can directly use (array1 > array2).astype(int). In case of array1 being a multidimensional array with more than one column, I am using the following method involving a loop
results = np.zeros_like(array1)
for each in range(array1.shape[1]):
    results[:, each] = array1[:, each] > array2
Is there a more pythonic/numpy way of doing it?
Reshape array2 to a 2d array with shape (2, 1); then the comparison works thanks to numpy broadcasting:
(array1 > array2[:,None]).astype(int)
#array([[1, 1, 1, 0],
# [1, 0, 1, 1]])
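Any of the usual ways of adding the trailing axis is equivalent here; a quick sketch (names follow the question):

import numpy as np

array1 = np.random.rand(2, 4)
array2 = np.random.rand(2)

a = (array1 > array2[:, None]).astype(int)        # None adds an axis
b = (array1 > array2[:, np.newaxis]).astype(int)  # np.newaxis is an alias for None
c = (array1 > array2.reshape(-1, 1)).astype(int)  # explicit reshape to (2, 1)
assert (a == b).all() and (a == c).all()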
Given the ndarray:
A = np.array([np.array([1], dtype='f'),
              np.array([2, 3], dtype='f'),
              np.array([4, 5], dtype='f'),
              np.array([6], dtype='f'),
              np.array([7, 8, 9], dtype='f')], dtype=object)
which displays as:
A
array([array([ 1.], dtype=float32), array([ 2., 3.], dtype=float32),
array([ 4., 5.], dtype=float32), array([ 6.], dtype=float32),
array([ 7., 8., 9.], dtype=float32)], dtype=object)
I am trying to create a new array from the first elements of each "sub-array" of A. To show you what I mean, below is some code creating the array that I want using a loop. I would like to achieve the same thing but as efficiently as possible, since my array A is quite large (~50000 entries) and I need to do the operation many times.
B = np.zeros(len(A))
for i, val in enumerate(A):
    B[i] = val[0]
B
array([ 1., 2., 4., 6., 7.])
Here's an approach that concatenates all elements into a 1D array and then selects the first elements by linear indexing. The implementation looks like this:
lens = np.array([len(item) for item in A])                 # length of each sub-array
out = np.concatenate(A)[np.append(0, lens[:-1].cumsum())]  # offsets of the first elements
The bottleneck would be the concatenation part, but that might be offset if there is a huge number of elements with small lengths. So the efficiency depends on the format of the input array.
I suggest transforming your original jagged array of arrays into a single masked array:
B = np.ma.masked_all((len(A), max(map(len, A))))
for ii, row in enumerate(A):
    B[ii, :len(row)] = row
Now you have:
[[1.0 -- --]
[2.0 3.0 --]
[4.0 5.0 --]
[6.0 -- --]
[7.0 8.0 9.0]]
And you can get the first column this way:
B[:,0].data
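Since every sub-array of A is non-empty, the first column is never masked, so .data is safe here. The masked layout also makes other column-wise reductions cheap; continuing with the B built above (purely illustrative):

print(B[:, 0].data)    # [1. 2. 4. 6. 7.] -- same as the loop result
print(B.mean(axis=0))  # [4.0 5.333... 9.0], ignoring the masked slots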
This post identifies a "feature" that I would like to disable.
Current numpy behavior:
>>> a = arange(10)
>>> a[a>5] = arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 0, 1, 2, 3])
The reason it's a problem: say I wanted an array to have two different sets of values on either side of a breakpoint (e.g., for making a "broken power-law" or some other simple piecewise function). I might accidentally do something like this:
>>> x = empty(10)
>>> a = arange(10)
>>> x[a<=5] = 0 # this is fine
>>> x[a>5] = a**2 # this is not
# but what I really meant is this
>>> x[a>5] = a[a>5]**2
The first behavior, x[a>5] = a**2, yields something I consider counterintuitive: the left-hand and right-hand shapes disagree and the right side is not a scalar, yet numpy lets me do the assignment. As pointed out in the other post, x[5:] = a**2 is not allowed.
So, my question: is there any way to make x[a>5] = a**2 raise an Exception instead of performing the assignment? I'm very worried that I have typos hiding in my code because I never before suspected this behavior.
I don't know of a way offhand to disable a core numpy feature. Instead of disabling the behavior, you could try using np.select:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.select.html
In [110]: x = np.empty(10)
In [111]: a = np.arange(10)
In [112]: x[a<=5] = 0
In [113]: x[a>5] = a**2
In [114]: x
Out[114]: array([ 0., 0., 0., 0., 0., 0., 0., 1., 4., 9.])
In [117]: condlist = [a<=5,a>5]
In [119]: choicelist=[0,a**2]
In [120]: x = np.select(condlist,choicelist)
In [121]: x
Out[121]: array([ 0, 0, 0, 0, 0, 0, 36, 49, 64, 81])
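For the two-branch case in the question, np.where is even more direct; a sketch of the same computation (np.where is standard numpy, the rest mirrors the session above):

import numpy as np

a = np.arange(10)
x = np.where(a > 5, a**2, 0)  # elementwise: a**2 where a > 5, else 0
print(x)  # [ 0  0  0  0  0  0 36 49 64 81]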