I ran into some problems while practicing how to use xgboost.
As far as I know, the DMatrix is a special internal data structure that makes the model run faster.
Here's the problem:
To tune the model, (I guess) GridSearchCV or RandomizedSearchCV are worth considering.
With the code below:
params = {
    'min_child_weight': [1, 5, 10],
    'gamma': [0.5, 1, 1.5, 2, 5],
    'subsample': [0.6, 0.8, 1.0],
    'colsample_bytree': [0.6, 0.8, 1.0],
    'max_depth': [3, 4, 5]
}
random_search = RandomizedSearchCV(xgb, param_distributions=params, n_iter=param_comb,
                                   scoring='roc_auc', n_jobs=4, cv=skf.split(X, Y),
                                   verbose=3, random_state=1001)
I can also do cross validation by passing cv, which is great.
However, it really takes time (almost 40 minutes with a big dataset on a Colab GPU) and I really want to speed it up.
After I transform my training data into a DMatrix:
xgbtrain = xgb.DMatrix(train_x, train_y)
I don't know what to do next, because .fit requires X and y.
How can I do that? Or is there any way to make it faster?
Thanks
This question is pretty old, so I suspect you may have already found an answer. XGBoost can be tricky to navigate; there are several different options for incorporating CV or parameter tuning.
Instead of using xgb.fit() you can use xgb.train(), which accepts the DMatrix object. Additionally, XGBoost has xgb.cv() for performing cross validation. I myself am hoping to find an alternative to GridSearchCV, but I don't think there is one. The best method may be to create a loop around xgb.cv() that compares the evaluation results and identifies the best-performing parameters (see the sketch below).
XGBoost has really helpful documentation; you may want to check out the XGB Python Intro: Training and the Cross Validation Demo.
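As a rough, untested sketch of that approach (not from the original post): xgbtrain is the DMatrix built in the question, the specific parameter values are placeholders, and tree_method='gpu_hist' assumes an XGBoost version that supports it:

import xgboost as xgb

# xgbtrain = xgb.DMatrix(train_x, train_y) -- the DMatrix from the question
params = {
    'objective': 'binary:logistic',
    'eval_metric': 'auc',
    'max_depth': 4,
    'min_child_weight': 1,
    'subsample': 0.8,
    'colsample_bytree': 0.8,
    'tree_method': 'gpu_hist',   # assumption: run on the Colab GPU
}

# Cross validation directly on the DMatrix, no sklearn wrapper needed
cv_results = xgb.cv(
    params,
    xgbtrain,
    num_boost_round=500,
    nfold=5,
    stratified=True,
    early_stopping_rounds=25,
    seed=1001,
)
print(cv_results['test-auc-mean'].max())

# Once parameters are chosen, train a final booster on the full DMatrix
booster = xgb.train(params, xgbtrain, num_boost_round=len(cv_results))

You would wrap the xgb.cv() call in a loop over candidate parameter dictionaries and keep whichever gives the best test metric.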
Try Optuna for hyperparameter tuning of XGBoost; it is much, much faster. Also use the GPU (tree_method = 'gpu_hist'). Kaggle offers free GPU time every week.
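For illustration only, a minimal Optuna sketch along those lines; the search ranges mirror the grid from the question, while objective, xgbtrain and the trial count are assumptions rather than code from this answer:

import optuna
import xgboost as xgb

def objective(trial):
    # Sample one candidate configuration per trial
    params = {
        'objective': 'binary:logistic',
        'eval_metric': 'auc',
        'tree_method': 'gpu_hist',
        'max_depth': trial.suggest_int('max_depth', 3, 5),
        'min_child_weight': trial.suggest_int('min_child_weight', 1, 10),
        'gamma': trial.suggest_float('gamma', 0.5, 5.0),
        'subsample': trial.suggest_float('subsample', 0.6, 1.0),
        'colsample_bytree': trial.suggest_float('colsample_bytree', 0.6, 1.0),
    }
    # Evaluate it with xgb.cv on the DMatrix from the question
    cv = xgb.cv(params, xgbtrain, num_boost_round=300, nfold=5,
                stratified=True, early_stopping_rounds=25, seed=1001)
    return cv['test-auc-mean'].max()

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=50)
print(study.best_params)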
Given that I know the number of axes, can I specify the number of axes in the type hint npt.NDArray (from import numpy.typing as npt)?
I.e., if I know it is a 3D array, how can I do something like npt.NDArray[3, np.float64]?
On Python 3.9 and 3.10 the following does the job for me:
import numpy as np
from typing import Literal, Tuple

data = [[1, 2, 3], [4, 5, 6]]
arr: np.ndarray[Tuple[Literal[2], Literal[3]], np.dtype[np.int_]] = np.array(data)
It is a bit cumbersome, but you might follow numpy issue #16544 for future development on easier specification.
In particular, for now you must declare the full shape and can't only declare the rank of the array.
In the future something like ndarray[Shape[:, :, :], dtype] should be available.
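For context, the npt.NDArray alias mentioned in the question is currently parametrized by dtype only, so the rank is not expressed at all; a minimal example:

import numpy as np
import numpy.typing as npt

# NDArray is generic over dtype only; the shape/rank is not checked.
arr: npt.NDArray[np.float64] = np.zeros((2, 3, 4))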
I need to export a CNN computational graph from TensorBoard as a pandas DataFrame.
I have looked at https://www.tensorflow.org/tensorboard/dataframe_api, but only training information is logged (because a callback function is defined during the training process).
Is there any way to log the network architecture and weights, and then extract them as a pandas DataFrame?
The last time I tried doing this using the source you mentioned, it didn't go well. I found out that I couldn't use ExperimentFromDev (not so sure now), which was used in the tutorial. I instead manually read the TB log files using the method from this question. The second answer there could be the solution in your case.
from tensorboard.backend.event_processing import event_accumulator
import pandas as pd

ea = event_accumulator.EventAccumulator(
    'events.out.tfevents.x.ip-x-x-x-x',
    size_guidance={  # see below regarding this argument
        event_accumulator.COMPRESSED_HISTOGRAMS: 500,
        event_accumulator.IMAGES: 4,
        event_accumulator.AUDIO: 4,
        event_accumulator.SCALARS: 0,
        event_accumulator.HISTOGRAMS: 1,
    })
ea.Reload()  # load the events from the file before querying
pd.DataFrame(ea.Scalars('Loss')).to_csv('Loss.csv')
Is there any way in federated TensorFlow to make clients train the model for multiple epochs on their dataset? I found in the tutorials that a solution could be modifying the dataset by running dataset.repeat(NUMBER_OF_EPOCHS), but why should I modify the dataset?
The tf.data.Dataset is the TF2 way of setting this up. It may be useful to think of the code as modifying the "data pipeline" rather than the "dataset" itself.
https://www.tensorflow.org/guide/data and particularly the section https://www.tensorflow.org/guide/data#processing_multiple_epochs can be useful pointers.
At a high-level, the tf.data API sets up a stream of examples. Repeats (multiple epochs) of that stream can be configured as well.
import tensorflow as tf

dataset = tf.data.Dataset.range(5)
for x in dataset:
    print(x)  # prints 0, 1, 2, 3, 4 on separate lines.

repeated_dataset = dataset.repeat(2)
for x in repeated_dataset:
    print(x)  # same as above, but twice.

shuffled_repeat_dataset = dataset.shuffle(
    buffer_size=5, reshuffle_each_iteration=True).repeat(2)
for x in shuffled_repeat_dataset:
    print(x)  # two passes over the data, each with a different ordering.
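Carrying that over to the federated setting, the repeat is attached to each client's input pipeline rather than to the stored data. A minimal sketch, where NUM_EPOCHS, BATCH_SIZE and preprocess are illustrative names (not TFF API):

import tensorflow as tf

NUM_EPOCHS = 5
BATCH_SIZE = 20

def preprocess(client_dataset: tf.data.Dataset) -> tf.data.Dataset:
    # Each client iterates over its local data NUM_EPOCHS times per round.
    return (client_dataset
            .repeat(NUM_EPOCHS)
            .shuffle(buffer_size=100)
            .batch(BATCH_SIZE))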
I would like to feed my model with stride-1 windows taken from a very long data sequence (tens of millions of entries). This is similar to the aim presented in this thread, except that my data sequence may contain several features to begin with, so the final number of features is n_features * window_size. I.e., with two original features and a window size of 3, this would mean transforming this:
[[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]
to:
[[1, 2, 3, 6, 7, 8], [2, 3, 4, 7, 8, 9], [3, 4, 5, 8, 9, 10]]
I was trying to use slicing with map_fn or Dataset.map, applied to a sequence of indices (per the answer in the above-mentioned thread), as in:
ti = tf.range(data.shape[0] - window_size)
train_dataset = tf.data.Dataset.from_tensor_slices((ti, labels))

def get_window(l, label):
    wnd = tf.reshape(data_tensor[l:(l + window_size), :], (-1, window_size * n_features))
    wnd = tf.squeeze(wnd)
    return (wnd, label)

train_dataset = train_dataset.map(get_window)
train_dataset = train_dataset.batch(batch_size)
...
This works in principle, but training is extremely slow, with minimal GPU utilization (1-5%, probably in part because the mapping is done on the CPU).
When I try to do the same with tf.map_fn, graph building becomes very lengthy, with tremendous memory utilization.
Another option I tried is to transform all of the data in advance, before loading it into TensorFlow. This works much faster (even when the pre-processing time is included; I wonder why - shouldn't it be the same operation as the mapping during training?), but it is very inefficient in terms of memory and storage, as the data becomes window_size times larger. That is a deal-breaker for my large datasets.
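For reference, a minimal sketch of that "transform in advance" step with NumPy, assuming data is a (num_samples, n_features) array and NumPy >= 1.20 is available for sliding_window_view:

import numpy as np

# data: (num_samples, n_features)
# sliding_window_view yields shape (num_samples - window_size + 1, n_features, window_size);
# transposing and reshaping flattens each stride-1 window in time-major order.
windows = np.lib.stride_tricks.sliding_window_view(data, window_size, axis=0)
windows = windows.transpose(0, 2, 1).reshape(-1, window_size * n_features)
# Note: the final reshape materializes a copy, which is exactly the
# window_size-fold memory blowup described above.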
I thought about splitting these transformed, bloated datasets into several files ("hyper-batches") and going through them in sequence for each epoch, but this seems very inefficient, and I was wondering if there is a better way to achieve this simple transformation.
Suppose I have a tensor:
A=[[1,2,3],[4,5,6]]
Which is a matrix with 2 rows and 3 columns.
I would like to replicate it, suppose twice, to get the following tensor:
A2 = [[1,2,3],
[1,2,3],
[4,5,6],
[4,5,6]]
Using plain tiling (the equivalent of Matlab's repmat) will clearly replicate it differently, so I tried the following code (which works):
A_tiled = tf.reshape(tf.tile(A, [1, 2]), [4, 3])
Unfortunately, it seems to work very slowly when the number of columns becomes large. Executing the equivalent in Matlab using a Kronecker product with a vector of ones (Matlab's kron) seems to be much faster.
Can anyone help?
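For reference, a self-contained version of the tile-and-reshape approach from the question, with tf.repeat shown alongside only as a possible alternative to benchmark (assuming TensorFlow 2.x):

import tensorflow as tf

A = tf.constant([[1, 2, 3], [4, 5, 6]])

# Row-wise replication via tile + reshape, as in the question:
# [[1 2 3], [1 2 3], [4 5 6], [4 5 6]]
A_tiled = tf.reshape(tf.tile(A, [1, 2]), [4, 3])

# tf.repeat produces the same layout and may be worth timing against it.
A_repeated = tf.repeat(A, repeats=2, axis=0)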