Is it possible to divide multiple features at once in ArcGIS Pro?

I have a hydrographic network layer in ArcGIS Pro and I would like to divide all of the features within it that are more than 1km long so that the layer only contains features 1km long and shorter. This is easy enough to do one feature at a time with the Divide tool under Modify Features but the layer contains 16,467 features greater than 1km and the Divide tool will only accept one feature at a time. Is there a way to divide multiple features at once within ArcGIS Pro?
Thank you very much to anyone who can help.
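One possible scripted approach, sketched with arcpy under the assumption that the data is in a projected coordinate system measured in metres (the workspace and layer names below are placeholders), is to generate a point every 1 km along each line and then split the lines at those points:

    # Sketch only: split every line into pieces of at most 1 km by generating
    # points every 1000 m along each line and then splitting at those points.
    import arcpy

    arcpy.env.workspace = r"C:\data\hydro.gdb"   # placeholder geodatabase

    lines = "hydro_network"            # input hydrographic network (placeholder)
    points = "split_points_1km"        # intermediate output
    out_lines = "hydro_network_1km"    # result: no feature longer than 1 km

    # One point every 1000 m measured along each line.
    arcpy.management.GeneratePointsAlongLines(lines, points, "DISTANCE", "1000 Meters")

    # Split the original lines wherever one of those points falls on them.
    arcpy.management.SplitLineAtPoint(lines, points, out_lines, "1 Meters")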

Related

What is the limit on creating classifiers for one instance of Visual recognition service

I would like to know the maximum number of classifiers that can be created for a given instance of the Standard plan of the IBM Visual Recognition service.
You can create a maximum of 100 classifiers for a given instance of the Standard plan of the IBM Visual Recognition service.
There is not a hard limit on the number of classifiers you can create. If you create many (dozens, say) and try to use them all in a single /classify request at the same time, you may experience higher latency, though. However, you can work around this by sending requests in parallel, each using a number of classifiers that gives an acceptable amount of latency.
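A minimal sketch of that batching work-around (the batch size is an assumption to tune against your latency target, and classify_with is a placeholder for whatever /classify call you already make, via the SDK or plain HTTP):

    # Split the classifier IDs into batches and send the /classify requests in
    # parallel, so no single request carries every classifier at once.
    import concurrent.futures

    ALL_CLASSIFIER_IDS = ["classifier_1", "classifier_2", "classifier_3"]  # your IDs
    BATCH_SIZE = 5  # tune until a single request's latency is acceptable

    def classify_with(classifier_ids, image_url):
        """Placeholder: issue one /classify request limited to these classifiers."""
        raise NotImplementedError("call the Visual Recognition service here")

    def classify_all(image_url):
        batches = [ALL_CLASSIFIER_IDS[i:i + BATCH_SIZE]
                   for i in range(0, len(ALL_CLASSIFIER_IDS), BATCH_SIZE)]
        with concurrent.futures.ThreadPoolExecutor(max_workers=len(batches)) as pool:
            futures = [pool.submit(classify_with, batch, image_url) for batch in batches]
            return [f.result() for f in futures]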

Using Pyradiomics to calculate shape features on meshes instead of matrices?

I am trying to compute some mesh features for 3D models that I created using numpy-stl. I would like to compute all of the features provided by pyradiomics, but I am not sure how to use them on just the meshes, without all of the extra binary image and matrix information. Or is there a better program to use for shape feature extraction? Also, the documentation says that some features require C extensions to be enabled. How can you do that in your Python script?
C extensions are enabled by default. As of PyRadiomics 2.0, the pure-Python equivalents of those functions have been removed (they had horrible performance).
As for your meshes: PyRadiomics is built to extract features from images and binary labelmaps. To use meshes, you would first have to convert them.
What features do you want to extract? PyRadiomics does use a sort of on-the-fly built mesh to calculate surface area and volume, which are also used in the calculation of several other shape features.
If you want to take a look at how volume and surface area are calculated, the source code for that is in C (radiomics/src/cshape.c). The calculation of the derived features (e.g. sphericity) is in shape.py.
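As a minimal sketch of that conversion route (assuming you can voxelise the mesh yourself, e.g. with trimesh or your own code, and that you know the voxel spacing; the file name and spacing below are placeholders), the shape feature class could then be run like this:

    # Rasterise the mesh to a binary labelmap, wrap it as a SimpleITK image and
    # extract only the shape features; shape features ignore the intensity image,
    # so a dummy image of the same size is enough.
    import numpy as np
    import SimpleITK as sitk
    from radiomics import featureextractor

    voxels = np.load("voxelised_mesh.npy")   # placeholder: 3-D boolean occupancy array
    spacing = (1.0, 1.0, 1.0)                # placeholder voxel spacing in mm

    mask = sitk.GetImageFromArray(voxels.astype(np.uint8))
    mask.SetSpacing(spacing)
    image = sitk.GetImageFromArray(voxels.astype(np.float32))
    image.SetSpacing(spacing)

    extractor = featureextractor.RadiomicsFeatureExtractor()
    extractor.disableAllFeatures()
    extractor.enableFeatureClassByName("shape")

    features = extractor.execute(image, mask)
    for name, value in features.items():
        if "shape" in name:
            print(name, value)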

Analyzing Fitbit walking and sleeping data

I'm participating in a small data analysis competition at my school.
We use Fitbit wearable devices, which are loaned to each participant by the contest host. For the two months of the contest, participants walk and sleep with this small device 24/7, allowing it to gather data such as step counts and heart rate (bpm), and we need to solve some problems based on the participants' data.
For example: show the relationship between rainy days and participants' workout rates using a chart. I think the purpose of the problem is that, because of rain, many participants are expected to stay at home, and we should demonstrate the cause and effect numerically.
I'm now studying the Python libraries numpy and pandas with the IPython notebook, but I still have no idea how to solve these problems. Could you recommend some projects or sites to use as references? I'm really eager to win this competition.
And lastly, sorry for my poor English.
Thank you.
That's a fun project. I'm working on something kind of similar.
Here's what you need to do:
Learn the Fitbit API and stream the data from the Fitbit accelerometer and gyroscope. If you can combine this with heart rate data, great. The more types of data you have, the more effective your algorithm will be. You can store this data in a simple CSV file (streaming the accel/gyro data at 50 Hz is recommended), or set up a web server and store it in a database for easy access.
Learn how to use pandas and scikit-learn
[optional but recommended]: Learn matplotlib so you can graph your data and get a feel for how it looks
Load the data into pandas and create features on the data - notably using 1-2 second sliding window analysis with 50% overlap. Good features include (for all three accel axes X, Y, Z): max, min, standard deviation, root mean square, root sum square and tilt. Polynomials will help. (A rough sketch of this step and the training step appears after this list.)
Since this is a supervised classification problem, you will need to create some labelled data - so do this manually (state 1 = rainy day, state 2 = non-rainy day) and then train a classification algorithm. I would recommend a random forest
Test using unlabeled data - don't forget to use cross validation
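A rough sketch of the feature-building, training and cross-validation steps above (assuming a CSV with accel_x/accel_y/accel_z columns, a per-sample label column and 50 Hz sampling; this is not the exact code, just the shape of it):

    # Windowed features on the accelerometer columns, then a random forest.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("fitbit_stream.csv")   # placeholder: accel_x, accel_y, accel_z, label

    WINDOW = 100   # 2 s at 50 Hz
    STEP = 50      # 50 % overlap

    def window_features(frame):
        """Summary features for one window of samples."""
        feats = {}
        for axis in ("accel_x", "accel_y", "accel_z"):
            x = frame[axis].to_numpy()
            feats[f"{axis}_max"] = x.max()
            feats[f"{axis}_min"] = x.min()
            feats[f"{axis}_std"] = x.std()
            feats[f"{axis}_rms"] = np.sqrt(np.mean(x ** 2))
        # Label the window with its most common per-sample label.
        feats["label"] = frame["label"].mode().iloc[0]
        return feats

    rows = [window_features(df.iloc[start:start + WINDOW])
            for start in range(0, len(df) - WINDOW + 1, STEP)]
    windows = pd.DataFrame(rows)

    X = windows.drop(columns="label")
    y = windows["label"]

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy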
Voila, you now have a highly accurate model and will win the competition. Plus you've learned about a bunch of really cool Python and machine learning stuff.
For more tutorials on how all this stuff works, I'd highly recommend the Kaggle tutorial projects
BONUS: If you want to take it to a new level, you can start adding smoothers on top of your classifier, for example by using a Hidden Markov Model as explained in this talk
BONUS 2: Go get a PhD in Human Activity Recognition.
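To make the first bonus a bit more concrete, here is a toy Viterbi smoother over a sequence of predicted labels (this is not from the linked talk; the transition and emission probabilities are made-up assumptions, and in practice you would estimate them from data or use a library such as hmmlearn):

    # Smooth a noisy 0/1 label sequence with a 2-state HMM decoded by Viterbi.
    import numpy as np

    def viterbi_smooth(obs, trans, emit, prior):
        """obs: predicted labels (0/1); returns the smoothed state sequence."""
        n_states = trans.shape[0]
        T = len(obs)
        log_delta = np.zeros((T, n_states))
        back = np.zeros((T, n_states), dtype=int)
        log_delta[0] = np.log(prior) + np.log(emit[:, obs[0]])
        for t in range(1, T):
            scores = log_delta[t - 1][:, None] + np.log(trans)
            back[t] = scores.argmax(axis=0)
            log_delta[t] = scores.max(axis=0) + np.log(emit[:, obs[t]])
        states = np.zeros(T, dtype=int)
        states[-1] = log_delta[-1].argmax()
        for t in range(T - 2, -1, -1):
            states[t] = back[t + 1, states[t + 1]]
        return states

    # Sticky transitions discourage rapid label flipping; the emissions assume
    # the classifier is right about 85 % of the time (made-up numbers).
    trans = np.array([[0.95, 0.05], [0.05, 0.95]])
    emit = np.array([[0.85, 0.15], [0.15, 0.85]])
    prior = np.array([0.5, 0.5])

    noisy = np.array([0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1])
    print(viterbi_smooth(noisy, trans, emit, prior))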

Scaling up SURF lookups

I am currently trying to recognise DVD covers in generic photos. My initial test involved using 100 DVD covers and 10 test cases of photos that contained them, and with some tweaking of the find_obj.cpp example in OpenCV I was able to get recognition working.
However now I need to do this on a much larger database, and I am aware that the FLANN method will not scale up well to meet this requirement. How do people here recommend I scale up my SURF recognition in an SQL database?
If you really want to scale your system by several orders of magnitude, nearest-neighbor search (FLANN) will not be sufficient.
In that case, what you need is to build a visual vocabulary (a.k.a. bag of words) by quantizing your descriptors, and to create an inverted index.
I recommend referring to the Scalable Recognition with a Vocabulary Tree paper, which is the reference publication on this topic.
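A condensed sketch of that vocabulary-plus-inverted-index pipeline (not from the paper; the vocabulary size, SURF parameters and the in-memory dictionary standing in for an SQL table are all assumptions, and SURF requires an opencv-contrib build):

    # Quantise all SURF descriptors into "visual words" with k-means, index each
    # DVD cover by the words it contains, then answer queries by voting.
    import collections
    import cv2
    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

    def descriptors(path):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = surf.detectAndCompute(img, None)
        return desc

    cover_paths = ["cover_001.jpg", "cover_002.jpg"]   # placeholder database
    all_desc = np.vstack([descriptors(p) for p in cover_paths])

    # 1. Build the visual vocabulary.
    vocab = MiniBatchKMeans(n_clusters=1000, random_state=0).fit(all_desc)

    # 2. Build the inverted index: visual word -> ids of covers containing it.
    inverted = collections.defaultdict(set)
    for cover_id, path in enumerate(cover_paths):
        for word in vocab.predict(descriptors(path)):
            inverted[word].add(cover_id)

    # 3. Query: vote for covers sharing visual words with the photo.
    def query(photo_path, top_k=5):
        votes = collections.Counter()
        for word in vocab.predict(descriptors(photo_path)):
            votes.update(inverted[word])
        return votes.most_common(top_k)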

Retrieving the most significant features gained from SIFT / SURF

I'm using SURF to extract features from images and match them to others. My problem is that some images have in excess of 20,000 features, which slows down matching to a crawl.
Is there a way I can extract only the n most significant features from that set?
I tried computing MSER for the image and only use features that are within those regions. That gives me a reduction anywhere from 5% to 40% without affecting matching quality negatively, but that's unreliable and still not enough.
I could additionally size the image down, but that seems to affect the quality of the features severely in some cases.
SURF offers a few parameters (hessian threshold, octaves and layers per octave) but I couldn't find anything on how changing these would affect feature significance.
After some researching and testing, I have found that the Hessian value for each feature is a rough estimate of its strength; however, using the top n features sorted by the Hessian is not optimal.
I achieved better results when doing the following until the number of features is below the target of n (a rough code sketch follows at the end of this answer):
Size the image down, if it is overly large
Only features that lie in MSER regions are considered
For features that lie very close to each other, only the feature with the higher hessian is considered
Of the n features per image that I want to save, 75% are the features with the highest hessian values
The remaining features are taken randomly from the remainder, weighted by distribution of the hessian values computed through a histogram
Now I only need to find a suitable n, but around 1500 seems enough currently.
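A rough sketch of this selection procedure (simplified: the image down-sizing and MSER filtering steps are omitted, the remaining 25% are weighted directly by their Hessian response rather than via a histogram, and the parameter values are assumptions; SURF requires an opencv-contrib build):

    # Keep the top 75 % of n keypoints by Hessian response and sample the rest
    # of the quota from the remainder, weighted by response.
    import cv2
    import numpy as np

    def select_features(img, n=1500, top_fraction=0.75):
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        keypoints, descriptors = surf.detectAndCompute(img, None)
        if len(keypoints) <= n:
            return keypoints, descriptors

        # For SURF, keypoint.response is the Hessian value; sort descending.
        order = np.argsort([-kp.response for kp in keypoints])
        n_top = int(n * top_fraction)
        top = order[:n_top]

        # Sample the remaining quota from the rest, weighted by response.
        rest = order[n_top:]
        weights = np.array([keypoints[i].response for i in rest], dtype=float)
        weights /= weights.sum()
        sampled = np.random.choice(rest, size=n - n_top, replace=False, p=weights)

        keep = np.concatenate([top, sampled])
        return [keypoints[i] for i in keep], descriptors[keep]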