I am new to LightGBM. I have big data (billions of rows, constantly updated), and the dataset prepared for training is also wide, with around 400 columns.
I have 2 questions:
First, my kernel keeps dying after a few thousand iterations, even on a subset as small as 10,000 rows. Memory use keeps rising during training until it fails, and I have 126 GB of RAM.
I have tried training with different parameters; the commented values are ones I have tried as well:
parameters = {
    'histogram_pool_size': 5000,
    'objective': 'regression',
    'metric': 'l2',
    'boosting': 'dart',           # 'gbdt'
    'num_leaves': 10,             # 100
    'learning_rate': 0.01,
    'verbose': 0,
    'max_bin': 66,                # 6, 60, default
    'force_col_wise': True,       # default
    'max_depth': 10,              # default
    'min_data_in_leaf': 30,       # default
    'min_child_samples': 20,      # default
    'feature_fraction': 0.5,      # default
    'bagging_fraction': 0.8,      # default
    'bagging_freq': 40,           # default
    'bagging_seed': 11,           # default
    'lambda_l1': 2,               # default
    'lambda_l2': 0.1              # default
}
Limiting the number of columns seems to help, but I know that some columns with a low global feature importance score can still have significant importance in some local scope.
Second, what is the right way to train LightGBM on big data incrementally and to update the model with new data? I previously worked mainly with neural nets, which are trained incrementally by nature. I know that trees do not work this way: although it is technically possible to continue training a model, the result will not be the same as a model trained on all the data at once. How should I deal with this?
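For context, the continued training I mean is roughly the pattern below (a sketch only; chunk_iterator is a placeholder for however the new data arrives, and the round count is illustrative):

import lightgbm

booster = None
for x_chunk, y_chunk in chunk_iterator():  # placeholder generator over incoming data chunks
    chunk_ds = lightgbm.Dataset(x_chunk, label=y_chunk)
    booster = lightgbm.train(
        parameters,
        chunk_ds,
        num_boost_round=100,            # illustrative
        init_model=booster,             # None on the first chunk, previous booster afterwards
        keep_training_booster=True)     # keep the booster usable as init_model next time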
full code:
import lightgbm
from sklearn.model_selection import train_test_split

# X is a dataframe, y is the target
cat_names = X.select_dtypes(['bool', 'category', object]).columns.tolist()
for c in cat_names:
    X[c] = X[c].astype('category')
# positional indices of the categorical columns in X
cat_cols = [i for i, col in enumerate(X.columns) if col in cat_names]
X[cat_names] = X[cat_names].apply(lambda x: x.cat.codes)
x = X.values
x_train, x_valid, y_train, y_valid = train_test_split(x, y, test_size=0.2, random_state=42)
train_ds = lightgbm.Dataset(x_train, label=y_train)
valid_ds = lightgbm.Dataset(x_valid, label=y_valid)
model = lightgbm.train(parameters,
                       train_ds,
                       valid_sets=valid_ds,
                       categorical_feature=cat_cols,
                       num_boost_round=2000,
                       early_stopping_rounds=50)
Changing data types to smaller ones fixed the memory problem! If your dataset is a pandas dataframe, do something like this:
ds[ds.select_dtypes('float64').columns] = ds.select_dtypes('float64').astype('float32')
ds[ds.select_dtypes('int64').columns] = ds.select_dtypes('int64').astype('int32')
Caution: your data may fall outside the range of the selected data type, and pandas will mess up your data in that case. For example, the int8 dtype only covers -128 to 127, so pick dtypes that can handle your values.
You can check the range of a dtype with:
import numpy as np
np.iinfo('int32').min, np.iinfo('int32').max
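If you want to automate this, a minimal sketch using pd.to_numeric is below; for integer columns it picks the smallest dtype whose range covers the observed values, while for float columns it simply drops to float32, so exclude columns that need float64 precision (the function name is just illustrative):

import pandas as pd

def downcast_numeric(ds: pd.DataFrame) -> pd.DataFrame:
    # integers: smallest dtype whose range covers the observed min/max
    for col in ds.select_dtypes('int64').columns:
        ds[col] = pd.to_numeric(ds[col], downcast='integer')
    # floats: downcast to float32 (check precision requirements first)
    for col in ds.select_dtypes('float64').columns:
        ds[col] = pd.to_numeric(ds[col], downcast='float')
    return ds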
I have .stem.mp4 files each of which is composed of multiple audio sources.
Each file is 2 to 6 minutes long; the length varies a lot.
When I try to make a tf.data.Dataset out of them, generating an input_batch seems to take far longer than it takes my model to make a prediction on a given batch.
Let me illustrate with an example.
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras

sample_data = tf.random.normal((5, 755200, 2))  # 5 sources of audio, stereo channel
# The first slice along the first axis is the mixture of the audio, so this is the input.
# The remaining 4 slices are the individual sources (e.g. bass, drums, vocals), so these are the output.
input_mixture = sample_data[0, :, :]
target_mixtures = sample_data[1:, :, :]
target_mixtures = np.column_stack(target_mixtures)

length = 44100 * 11  # I want to split these into chunks of 11 seconds
strides = 44100      # 1 second stride

ds_inp = tf.data.Dataset.from_tensor_slices(input_mixture)
ds_inp = ds_inp.window(length, shift=strides, drop_remainder=True)
ds_inp = ds_inp.flat_map(lambda windows: windows.batch(length))
ds_inp = ds_inp.map(lambda windows: windows, num_parallel_calls=tf.data.AUTOTUNE)

ds_tar = tf.data.Dataset.from_tensor_slices(target_mixtures)
ds_tar = ds_tar.window(length, shift=strides, drop_remainder=True)
ds_tar = ds_tar.flat_map(lambda windows: windows.batch(length))
ds_tar = ds_tar.map(lambda windows: windows, num_parallel_calls=tf.data.AUTOTUNE)

total_ds = tf.data.Dataset.zip((ds_inp, ds_tar))
total_ds = total_ds.batch(BATCH_SIZE)  # BATCH_SIZE is defined elsewhere
total_ds = total_ds.prefetch(tf.data.AUTOTUNE)
This is how I made a tf.data.Dataset from the given file.
When I measure how long it takes to produce an input_batch and output_batch:
%%time
for i, j in total_ds.take(1):
    pass
# Wall time: 18.3 s
My model has about 100 million variables, but it has a fairly simple structure, so it takes only about 6 seconds to generate a predicted_batch from a given input_batch.
So my question is: is there any way to make it generate the input_batch and output_batch faster?
(My assumption is that, since this windows the given arrays, there is no better way to improve it.)
Obviously, all of the files are too big to be cached.
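For comparison, an unverified alternative sketch that frames the signals in a single vectorized op with tf.signal.frame instead of window + flat_map; note that it materializes every frame in memory, which may not be feasible for the longer files:

# frame along the time axis: (num_frames, length, channels)
inp_frames = tf.signal.frame(input_mixture, frame_length=length, frame_step=strides, axis=0)
tar_frames = tf.signal.frame(target_mixtures, frame_length=length, frame_step=strides, axis=0)

total_ds = (tf.data.Dataset.from_tensor_slices((inp_frames, tar_frames))
            .batch(BATCH_SIZE)
            .prefetch(tf.data.AUTOTUNE))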
I am having an issue at the moment; I think I'm making it far more complicated than it needs to be. My csv file has 31 columns and about 500 rows. I need to import it, split it in a 70/30 ratio, and then use the first column as my 'y' value for a neural network, with the remaining 30 columns as my 'x' values.
I've implemented the code below to do this, but when I run it through my basic sigmoid and testing functions, it gives results in a weird format, i.e. [6.54694655e-06].
I believe this is due to my splitting/importing of the data, which I think I have done wrong. I need to import the data into arrays that are readable by my functions, and be able to separate the first column specifically as my 'y' value. How do I go about this?
df = pd.read_csv(r'data.csv', header=None)
df.to_numpy()
#splitting data 70/30
trainingdata= df[:329]
testingdata= df[:141]
# converting data to separate arrays for training and testing
training_features= trainingdata.loc[:, trainingdata.columns != 0].values.reshape(329,30)
training_labels = trainingdata[0]
training_labels = training_labels.values.reshape(329,1)
testing_features = testingdata[0]
testing_labels = testingdata.loc[:, testingdata.columns != 0]
Usually, for splitting a dataframe into train and test data, I use sklearn.model_selection.train_test_split. Documentation here.
Some other methods are described here. Hope this helps!
Make your train/test split easy by using sklearn.model_selection.train_test_split.
If you don't have sklearn installed, first install it by running pip install -U scikit-learn.
Then
from sklearn.model_selection import train_test_split
df = pd.read_csv(r'data.csv', header=None)
# X is your features, y is your target column
X = df.loc[:,1:]
y = df.loc[:,0]
# Use train_test_split function with test size of 30%
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)
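If your sigmoid and testing functions expect plain NumPy arrays shaped like the reshapes in your question, you can convert afterwards; a small sketch (the (n, 30) and (n, 1) shapes are taken from your original reshape calls):

# convert the pandas objects to arrays shaped for the sigmoid/testing functions
training_features = X_train.to_numpy()               # shape (n_train, 30)
training_labels = y_train.to_numpy().reshape(-1, 1)  # shape (n_train, 1)
testing_features = X_test.to_numpy()                 # shape (n_test, 30)
testing_labels = y_test.to_numpy().reshape(-1, 1)    # shape (n_test, 1)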
df = pd.read_csv(r'data.csv')
df.to_numpy()
print(df)
Is there any way in federated-tensorflow to make clients train the model for multiple epochs on their dataset? I found in the tutorials that a solution could be to modify the dataset by running dataset.repeat(NUMBER_OF_EPOCHS), but why should I modify the dataset?
The tf.data.Dataset is the TF2 way of setting this up. It may be useful to think of the code as modifying the "data pipeline" rather than the "dataset" itself.
https://www.tensorflow.org/guide/data and particularly the section https://www.tensorflow.org/guide/data#processing_multiple_epochs can be useful pointers.
At a high-level, the tf.data API sets up a stream of examples. Repeats (multiple epochs) of that stream can be configured as well.
import tensorflow as tf

dataset = tf.data.Dataset.range(5)
for x in dataset:
    print(x)  # prints 0, 1, 2, 3, 4 on separate lines.

repeated_dataset = dataset.repeat(2)
for x in repeated_dataset:
    print(x)  # same as above, but twice

shuffled_repeat_dataset = dataset.shuffle(
    buffer_size=5, reshuffle_each_iteration=True).repeat(2)
for x in shuffled_repeat_dataset:
    print(x)  # same as above, but twice, with different orderings.
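In federated-tensorflow specifically, this is why the tutorials apply the repeat inside a per-client preprocessing function rather than changing the underlying data; a minimal sketch with illustrative names (NUM_EPOCHS, BATCH_SIZE and preprocess are not from the question):

NUM_EPOCHS = 5
BATCH_SIZE = 20

def preprocess(client_dataset):
    # repeating the element stream is how "local epochs" are expressed in the pipeline
    return client_dataset.repeat(NUM_EPOCHS).shuffle(100).batch(BATCH_SIZE)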
I am doing a task on traffic analysis and I am stymied by an error in my code. My data rows look like this:
quarter | DOW (day of week) | hour | density | speed | label (predicted speed half an hour ahead)
The values look like this:
1, 6, 19, 23, 53.32, 45.23
This means that on some specific street, during the 1st quarter of hour 19 on Friday, the measured traffic density is 23 and the current speed is 53.32; the predicted speed is 45.23.
The task is to predict the speed half an hour ahead from the predictors given above.
I am using this code to build a TensorFlow DNNRegressor for data:
import pandas as pd
data = pd.read_csv('dataset.csv')
X = data.iloc[:,:5].values
y = data.iloc[:, 5].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2, random_state=0)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(X_train)
X_train = pd.DataFrame(data=scaler.transform(X_train),columns = ['quarter','DOW','hour','density','speed'])
X_test = pd.DataFrame(data=scaler.transform(X_test),columns = ['quarter','DOW','hour','density','speed'])
y_train = pd.DataFrame(data=y_train,columns = ['label'])
y_test = pd.DataFrame(data=y_test,columns = ['label'])
import tensorflow as tf
speed = tf.feature_column.numeric_column('speed')
hour = tf.feature_column.numeric_column('hour')
density = tf.feature_column.numeric_column('density')
quarter= tf.feature_column.numeric_column('quarter')
DOW = tf.feature_column.numeric_column('DOW')
feat_cols = [quarter, DOW, hour, density, speed]
input_func = tf.estimator.inputs.pandas_input_fn(x=X_train, y=y_train, batch_size=10, num_epochs=1000, shuffle=False)
model = tf.estimator.DNNRegressor(hidden_units=[5,5,5],feature_columns=feat_cols)
model.train(input_fn=input_func,steps=25000)
predict_input_func = tf.estimator.inputs.pandas_input_fn(
    x=X_test,
    batch_size=10,
    num_epochs=1,
    shuffle=False)
pred_gen = model.predict(predict_input_func)
predictions = list(pred_gen)
final_preds = []
for pred in predictions:
    final_preds.append(pred['predictions'])
from sklearn.metrics import mean_squared_error
mean_squared_error(y_test,final_preds)**0.5
When I run this code, it throws an error ending with:
TypeError: Failed to convert object of type <class 'dict'> to Tensor. Contents: {'label': <tf.Tensor 'fifo_queue_DequeueUpTo:6' shape=(?,) dtype=float64>}. Consider casting elements to a supported type.
First of all, what does this error mean? I couldn't find the source of the error in order to deal with it. And how can I modify the code to fix it?
Secondly, does it improve model performance to use TensorFlow's categorical_column_with_identity instead of numeric_column for DOW, which indicates the day of week?
I also want to know whether it is useful to merge quarter and hour into a single column such as daytime (quarter is the minutes within an hour, normalized between 0 and 1).
First of all, what does this error mean? I couldn't find the source of the error in order to deal with it. And how can I modify the code to fix it?
Let me first talk about the solution to the problem. You need to change parameter y in pandas_input_fn as follows.
input_func = tf.estimator.inputs.pandas_input_fn(x=X_train, y=y_train['label'], batch_size=10, num_epochs=1000, shuffle=False)
It seems that the parameter y in pandas_input_fn doesn't support the DataFrame type once you run model.train(). In this case pandas_input_fn parses every y sample into a form like {column_name: value}, but model.train() can't recognize it, so you need to pass a Series instead.
Secondly, does it improve model performance to use TensorFlow's categorical_column_with_identity instead of numeric_column for DOW, which indicates the day of week?
This comes down to choosing between categorical and numeric columns in feature engineering. A very simple rule: choose numeric if larger versus smaller values of the feature carry meaning; if they don't, choose categorical. So I would tend to choose categorical_column_with_identity for the DOW feature.
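A minimal sketch of what that could look like here, assuming DOW is kept as an unscaled integer in 0-6 (the num_buckets value and the indicator_column wrapper are my assumptions; DNNRegressor needs dense columns, so the categorical column can't be passed in directly):

# DOW as integer ids 0-6 -> 7 identity buckets
dow_cat = tf.feature_column.categorical_column_with_identity('DOW', num_buckets=7)
dow_col = tf.feature_column.indicator_column(dow_cat)  # one-hot wrapper so the DNN can consume it
feat_cols = [quarter, dow_col, hour, density, speed]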
I also want to know whether it is useful to merge quarter and hour into a single column such as daytime (quarter is the minutes within an hour, normalized between 0 and 1).
Cross features can bring some benefits, as with latitude and longitude features. I recommend using tf.feature_column.crossed_column (link) here. It returns a column for performing crosses of categorical features. You can also keep the quarter and hour features in the model at the same time.
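A sketch of that cross for quarter and hour, again assuming the raw integer columns rather than the MinMax-scaled ones (the hash_bucket_size is an arbitrary illustrative value):

# cross the integer quarter and hour features into one "time of day" feature
time_cross = tf.feature_column.crossed_column(['quarter', 'hour'], hash_bucket_size=100)
# one-hot encode the cross so the DNN can consume it
time_col = tf.feature_column.indicator_column(time_cross)
feat_cols = feat_cols + [time_col]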
A similar error occurred to me:
Failed to convert object of type <class 'tensorflow.python.autograph.operators.special_values.Undefined'> to Tensor.
It occurred in a tf.function when I tried to use a variable that I had not assigned before.
To debug this, you have to remove tf.function from the method ;-)
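For illustration, a minimal sketch of the kind of code that can trigger this, where a variable is only assigned on one branch inside a tf.function (names are illustrative):

import tensorflow as tf

@tf.function
def buggy(x):
    if x > 0:
        result = x * 2
    return result  # 'result' is undefined on the x <= 0 path

@tf.function
def fixed(x):
    result = tf.zeros_like(x)  # assign before the branch
    if x > 0:
        result = x * 2
    return result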
Starting from the Tensorflow CNN example, I'm trying to modify the model to have multiple images as an input (so that the input has not just 3 input channels, but multiples of 3 by stacking images).
To augment the input, I try to use random image operations, such as flipping, contrast and brightness provided in TensorFlow.
My current solution to apply the same random distortion to all input images is to use a fixed seed value for these operations:
def distort_image(image):
    flipped_image = tf.image.random_flip_left_right(image, seed=42)
    contrast_image = tf.image.random_contrast(flipped_image, lower=0.2, upper=1.8, seed=43)
    brightness_image = tf.image.random_brightness(contrast_image, max_delta=0.2, seed=44)
    return brightness_image
This method is called multiple times, once per image, at graph construction time, so I thought each image would use the same random number sequence and, consequently, the same image operations would be applied across my input sequence.
# ...
# distort images
distorted_prediction = distort_image(seq_record.prediction)
distorted_input = []
for i in xrange(INPUT_SEQ_LENGTH):
    distorted_input.append(distort_image(seq_record.input[i, :, :, :]))
stacked_distorted_input = tf.concat(2, distorted_input)

# Ensure that the random shuffling has good mixing properties.
min_queue_examples = int(num_examples_per_epoch *
                         MIN_FRACTION_EXAMPLES_IN_QUEUE)

# Generate a batch of sequences and predictions by building up a queue of examples.
return generate_sequence_batch(stacked_distorted_input, distorted_prediction,
                               min_queue_examples, batch_size, shuffle=True)
In theory, this works fine, and after doing some test runs it really seemed to solve my problem. But after a while I found out that I have a race condition, because I use the input pipeline of the CNN example code with multiple threads (which is the method suggested in TensorFlow to improve performance and reduce memory consumption at runtime):
def generate_sequence_batch(sequence_in, prediction, min_queue_examples,
                            batch_size):
    num_preprocess_threads = 8  # <-- !!!
    sequence_batch, prediction_batch = tf.train.shuffle_batch(
        [sequence_in, prediction],
        batch_size=batch_size,
        num_threads=num_preprocess_threads,
        capacity=min_queue_examples + 3 * batch_size,
        min_after_dequeue=min_queue_examples)
    return sequence_batch, prediction_batch
Because multiple threads create my examples, it is no longer guaranteed that all image operations are performed in the right order (in the sense of the right order of the random operations).
Here I came to a point where I got completely stuck. Does anyone know how to solve this problem to apply the same image distortion to multiple images?
Some thoughts of mine:
I thought about adding some synchronization around these image distortion methods, but I could not find anything provided by TensorFlow.
I tried to generate a random number myself, e.g. for the random brightness delta, using tf.random_uniform(), and to use this value with tf.image.adjust_contrast(). But the result of the TensorFlow random generator is always a tensor, and I have not found a way to use this tensor as a parameter for tf.image.adjust_contrast(), which expects a simple float32 for its contrast_factor parameter.
A solution that would (partly) work would be to combine all images into one huge image using tf.concat(), apply random operations to change contrast and brightness, and split the image afterwards. But this would not work for random flipping, because flipping would (at least in my case) change the order of the images, and there is no way to detect whether tf.image.random_flip_left_right() has performed a flip or not, which would be required to fix the wrong order of images if necessary.
Here is what I came up with by looking at the code of random_flip_up_down and random_flip_left_right within TensorFlow:
def image_distortions(image, distortions):
    distort_left_right_random = distortions[0]
    mirror = tf.less(tf.pack([1.0, distort_left_right_random, 1.0]), 0.5)
    image = tf.reverse(image, mirror)
    distort_up_down_random = distortions[1]
    mirror = tf.less(tf.pack([distort_up_down_random, 1.0, 1.0]), 0.5)
    image = tf.reverse(image, mirror)
    return image


distortions = tf.random_uniform([2], 0, 1.0, dtype=tf.float32)
image = image_distortions(image, distortions)
label = image_distortions(label, distortions)
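Note that in TensorFlow 1.0+ tf.pack was renamed tf.stack and tf.reverse takes a list of axis indices instead of a boolean mask, so a modernized adaptation of the same idea (a sketch, not the original answer) might look like:

def image_distortions(image, distortions):
    # reuse the same random draws for image and label
    image = tf.cond(distortions[0] < 0.5,
                    lambda: tf.reverse(image, [1]),  # flip left/right (width axis)
                    lambda: image)
    image = tf.cond(distortions[1] < 0.5,
                    lambda: tf.reverse(image, [0]),  # flip up/down (height axis)
                    lambda: image)
    return image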
I would do something like this using tf.case. It allows you to specify what to return if a certain condition holds: https://www.tensorflow.org/api_docs/python/tf/case
import tensorflow as tf

def distort(image, x):
    # flip vertically, horizontally, both, or do nothing
    image = tf.case({
        tf.equal(x, 0): lambda: tf.reverse(image, [0]),
        tf.equal(x, 1): lambda: tf.reverse(image, [1]),
        tf.equal(x, 2): lambda: tf.reverse(image, [0, 1]),
    }, default=lambda: image, exclusive=True)
    return image

def random_distortion(image):
    x = tf.random_uniform([1], 0, 4, dtype=tf.int32)
    return distort(image, x[0])
To check if it works.
import numpy as np
import matplotlib.pyplot as plt

# create image
image = np.zeros((25, 25))
image[:10, 5:10] = 1.

# create subplots
fig, axes = plt.subplots(2, 2)
for i in axes.flatten():
    i.axis('off')

with tf.Session() as sess:
    for i in range(4):
        distorted_img = sess.run(distort(image, i))
        axes[i % 2][i // 2].imshow(distorted_img, cmap='gray')
plt.show()