HEC-HMS Gridded Curve Number - DSS

I'm currently working with HEC-HMS and I'm trying to use the gridded SCS Curve Number loss method. I managed to import my Curve Number raster into HEC-HMS as a DSS file, but when I launch my compute it fails with the following error message: "ERROR 40509: No grid cells for gridded subbasin "S_1"."
But I'm sure my DSS file contains values; I'm attaching a screenshot of my DSS file to this post.
Thanks in advance for answering me.
Have a nice day 😉
I tried different gridded Curve Number files, but I always got the same error. I also tried two versions of HEC-HMS (4.7 and 4.10), and the same problem appears in both.
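For what it's worth, here is a minimal sketch of how the records in the DSS file could be listed from Python, to double-check that the gridded data is really there. This assumes the third-party pydsstools package; "example.dss" is a placeholder file name, not my actual file:

# Sketch: list every record pathname stored in a DSS file to verify
# that the gridded Curve Number records exist. Assumes the third-party
# pydsstools package; "example.dss" is a placeholder file name.
from pydsstools.heclib.dss import HecDss

with HecDss.Open("example.dss") as fid:
    # "/*/*/*/*/*/*/" matches every record pathname in the file
    for pathname in fid.getPathnameList("/*/*/*/*/*/*/", sort=1):
        print(pathname)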

Related

XGBoost API's plot_importance displays unknown feature, `unnamed: 0`

XGBoost's native API's plot_importance displays an unknown feature, unnamed: 0, at the top of the chart.
Here is the output image (figure: Feature Importance Ranking).
I checked all the columns of the original dataframe passed into DMatrix and confirmed that there is no unknown feature left in it. I also removed the ID key.
So I confirmed that the original dataset does not include any unspecified feature in its columns.
My plot_importance code is here:
from xgboost import plot_importance
from matplotlib import pyplot

plot_importance(pw_model_1, max_num_features=10)
pyplot.savefig('plot.png')
pyplot.show()
Here pw_model_1 is the model selected after hyperparameter tuning.
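For reference, a minimal sketch of the usual way an `Unnamed: 0` column sneaks in (a pandas CSV round-trip that saves the index) and how it can be dropped before building the DMatrix; the file and column names below are placeholders, not taken from my pipeline:

# Sketch: "Unnamed: 0" typically appears when a dataframe is written to
# CSV with its index and read back without index_col. The names below
# (train.csv, target) are placeholders.
import pandas as pd
import xgboost as xgb

df = pd.read_csv('train.csv', index_col=0)              # read the saved index back as the index
df = df.loc[:, ~df.columns.str.startswith('Unnamed')]   # or drop any leftover index column
X, y = df.drop(columns=['target']), df['target']
dtrain = xgb.DMatrix(X, label=y)                        # feature names are taken from X.columns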
I would appreciate it if anyone could advise me on how to resolve this issue.
Thank you
Best regards
Michio

Feature Selection for Text Classification with Information Gain in R

I'm trying to prepare my dataset, ideally for binary document classification with an SVM algorithm in R.
The dataset is a combination of 150171 labelled variables and 2099 observations stored in a dataframe. The variables are a combination of uni- and bigrams which were retrieved from a text dataset.
When I try to calculate the information gain as a feature selection method, the error "cannot allocate vector of size X Gb" occurs, although I already extended my memory and I'm running on a 64-bit operating system. I tried the following package:
install.packages("FSelector")
library(FSelector)
value <- information.gain(Usefulness ~., dat_SentimentAnalysis)
Does anybody know a solution or any trick for this problem?
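One direction I have been considering is to score the features in chunks, so the full 2099 x 150171 matrix never has to be handled at once. Here is a minimal sketch of that idea in Python with scikit-learn (mutual information is the quantity information.gain estimates); this is an alternative route under stated assumptions, not an FSelector fix, and X/y are placeholders for a sparse document-term matrix and the Usefulness labels:

# Sketch: compute mutual information (information gain) in column chunks
# so only chunk_size features are materialised at a time.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.feature_selection import mutual_info_classif

def chunked_information_gain(X, y, chunk_size=5000):
    scores = []
    for start in range(0, X.shape[1], chunk_size):
        chunk = X[:, start:start + chunk_size]
        scores.append(mutual_info_classif(chunk, y, discrete_features=True))
    return np.concatenate(scores)

X = sparse_random(2099, 150171, density=0.001, format='csr')  # placeholder term counts
y = np.random.randint(0, 2, size=2099)                        # placeholder labels
ig = chunked_information_gain(X, y)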
Thank you very much in advance!

Cartopy aliasing

I have the following issue: when I transform an image from one map projection to another using Cartopy, the output picture shows quite ugly aliasing, with "steps" larger than one pixel. I attach the input and output pictures as an example.
Input - PlateCarree:
Output - Transformed:
Could anyone explain to me why this happens? Is it possible to correct it?
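For context, a minimal sketch of the kind of call involved, assuming the image is drawn with GeoAxes.imshow (an assumption, since I haven't shown my plotting code). The regrid_shape keyword sets the resolution of the intermediate regridding Cartopy performs when transforming an image, and raising it is one plausible fix for blocky output:

# Sketch: reproject a PlateCarree image to another projection.
# The data array is synthetic; regrid_shape raises the resolution of
# Cartopy's internal regridding, which otherwise can look blocky.
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

img = np.random.random((180, 360))  # placeholder global field

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection=ccrs.Robinson())
ax.imshow(img, origin='upper',
          extent=(-180, 180, -90, 90),
          transform=ccrs.PlateCarree(),
          regrid_shape=2000)
plt.show()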

How to refine the GraphCut mex code based on specific energy functions?

I downloaded the following graph-cut code:
https://github.com/shaibagon/GCMex
I compiled the mex files and ran it on the pre-defined image in the code (an RGB image).
I want to optimize the image segmentation results.
I have a probability map of the image whose dimensions are (width, height, 5): five probability distributions over the image, stacked together, each relating to one of the classes.
My problem is which parts of the code I should change according to the probability image.
I want to define the data and smoothness terms based on my application.
My questions are:
1) Has someone adapted the code to a different energy function? (I want to change the unary and pairwise formulations.)
2) I have a stack of 3D images. I want to define a 6-neighborhood system: 4 neighbors in the current slice and the other two from the two adjacent slices. In which function and part of the code can I make these refinements? A sketch of the data structures I have in mind is below.
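Not GCMex-specific, but a minimal sketch in Python/NumPy of what I mean: turning a per-class probability map into unary costs and enumerating 6-neighborhood edges over a 3-D volume. All array shapes here are placeholders; GCMex itself would expect the equivalent MATLAB arrays:

# Sketch: unary costs from a probability map plus 6-neighborhood edges
# for a 3-D volume. Pure NumPy illustration of the data structures.
import numpy as np

prob = np.random.dirichlet(np.ones(5), size=(4, 64, 64))  # (slices, H, W, classes) placeholder
unary = -np.log(prob + 1e-10)        # data term: negative log-likelihood per class

# 6-neighborhood: +/-1 in x and y within a slice, +/-1 in z across slices.
Z, H, W, _ = prob.shape
idx = np.arange(Z * H * W).reshape(Z, H, W)
edges = np.concatenate([
    np.stack([idx[:, :, :-1].ravel(), idx[:, :, 1:].ravel()], axis=1),  # x neighbors
    np.stack([idx[:, :-1, :].ravel(), idx[:, 1:, :].ravel()], axis=1),  # y neighbors
    np.stack([idx[:-1, :, :].ravel(), idx[1:, :, :].ravel()], axis=1),  # z neighbors
])                                   # each row is a (pixel, neighbor) pair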
Thanks

Why do I get the message 'Line search fails in two-class probability estimates' when using LIBSVM for binary classification?

I am suddenly facing a problem where I get the message 'Line search fails in two-class probability estimates' when classifying test images with LIBSVM for binary classification. The training database has 2000 samples and the test database has 2000 samples. The feature vector size is 200. However, for another feature of size 256 the problem does not arise, and it did not occur for other large feature sizes either; it has appeared suddenly. What can be the possible reason? Please help; thanks in advance.
I have tried the solution suggested in an earlier, similar question, but to no avail.
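For reference, a minimal sketch of scaling the features before training with probability estimates, since this warning comes from LIBSVM's probability-fitting step (-b 1) and unscaled features are a common trigger. The sketch uses scikit-learn's SVC, which wraps LIBSVM, and synthetic data in place of the real image features; both are assumptions, not the original setup:

# Sketch: scale features to [0, 1] before fitting with probability
# estimates. scikit-learn's SVC wraps LIBSVM; the data is synthetic.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X = np.random.randn(2000, 200)           # placeholder for the 2000 x 200 feature matrix
y = np.random.randint(0, 2, size=2000)   # placeholder binary labels

X_scaled = MinMaxScaler().fit_transform(X)  # same role as svm-scale in LIBSVM
clf = SVC(kernel='rbf', probability=True)   # probability=True corresponds to -b 1
clf.fit(X_scaled, y)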