How can I create shapes in Fusion 360 via API or command line?

I am looking to utilize Autodesk Fusion 360 to generate a huge number of shapes (tens of thousands) in 3D model form, so I need a way to operate it non-interactively.
I am aware of the Fusion 360 API documentation here https://autodeskfusion360.github.io/
But I was wondering if there is a known way to do this.

Essentially, the main steps are:
1) Create a base shape (e.g. sphere, cube, trapezoid, etc.) and save it.
2) Define and name each dimension on the shape as either dependent on another (e.g. height/2) or as an input (e.g. height), until your model is fully constrained and controlled by parameters.
3) Create a program (Python is friendly here) that writes values to those named inputs.
4) Within the program, connect to a database (using ODBC or similar) that iterates through the shape characteristics.
5) Feed the necessary characteristics from the database into the parameters and save each unique shape from within your program (a sketch of steps 3-5 follows below).
You will have to be more specific in the question to clarify the answer. Please include as detailed an example as you can and I'll edit my answer.
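In the meantime, here is a rough sketch of what steps 3-5 could look like as a Fusion 360 Python script. The user parameter name height, the SQLite database path and schema, and the STEP export folder are all placeholders I'm assuming for illustration, not anything from your setup:

```python
# Sketch only: assumes the open design is driven by a user parameter named
# "height" and that shape characteristics live in a local SQLite table.
import sqlite3
import traceback
import adsk.core, adsk.fusion

def run(context):
    ui = None
    try:
        app = adsk.core.Application.get()
        ui = app.userInterface
        design = adsk.fusion.Design.cast(app.activeProduct)

        export_mgr = design.exportManager
        height_param = design.userParameters.itemByName('height')  # placeholder name

        # Step 4: iterate through shape characteristics stored in a database.
        conn = sqlite3.connect(r'C:\temp\shapes.db')                # placeholder path
        rows = conn.execute('SELECT id, height FROM shapes')        # placeholder schema

        for shape_id, height in rows:
            # Step 3: drive the named input parameter.
            height_param.expression = '{} mm'.format(height)
            adsk.doEvents()                                         # let the model recompute

            # Step 5: save each unique shape, here as a STEP file.
            opts = export_mgr.createSTEPExportOptions(
                r'C:\temp\shape_{}.step'.format(shape_id))
            export_mgr.execute(opts)

        conn.close()
    except:
        if ui:
            ui.messageBox('Failed:\n{}'.format(traceback.format_exc()))
```

You would run something like this from the Scripts and Add-Ins dialog; for tens of thousands of shapes you would likely want to batch the exports rather than open and close documents repeatedly.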

Related

How to create image and/or graph in Microsoft Access 2010

I am porting an app over from Delphi (a Pascal form with an Access database) to operate strictly in Access. I have already done all the SQL and data handling successfully; now I need to present it graphically. The form features a full-version graph, and then a zoomed-in subset of that graph.
I attempted to reproduce the graph as a CHART, and as an Excel object. (Although I did not succeed with either of these approaches, I acknowledge that the solution may be in there somewhere.)
Ultimately, I did reproduce the full graph (at center right) - but to do so, I had to create hundreds of individual picture elements, and I ran up against the "number of objects limit" before I could complete the "ZOOMBOX"... Clearly the wrong approach.
I have plotted the elements of the graph (a subset of which would, of course, be the zoombox) into a table, which could be used as a sort of "paint-by-numbers" guide.

C5.0 gives back only a single leaf

I'm doing a data analysis task in SPSS Modeler and I have finally arrived to the point of the stream where I'm trying to fit some models on the data.
However, when I tried to run the mentioned C5.0 modeling node on my data, the node generated a modeling nugget containing only a single leaf, so there are no decision rules in the model. I partitioned the data beforehand into train and test subsets (70-30). I did not use misclassification costs, and I used the properly predefined attribute roles. On the node's Model page I checked the Use partitioned data, Build model for each split, Group symbolics, and Use global pruning options; I also tried expert mode, but it fails in simple mode too. I have tried different options, but it gives the same output without a single split.
How can I make the model give back a more complex decision tree? I suppose this is not the expected outcome.
Any suggestions are welcome.
Please check the distribution of your target variable and share it.
If the balance differs greatly from 50%-50%, you may need to balance your inputs first.
Misclassification costs are another technique to get a more useful output, but again they should be based on your empirical distributions.
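In SPSS Modeler you would typically do this with a Distribution node on the target and, if needed, a Balance node. Purely as an illustration of the same check outside Modeler, a minimal pandas sketch might look like this (the file name, column name, and threshold are placeholders):

```python
# Illustrative sketch: check the target distribution and naively oversample
# the minority class. File name, column name, and threshold are placeholders.
import pandas as pd

df = pd.read_csv('training_data.csv')
counts = df['target'].value_counts(normalize=True)
print(counts)                      # e.g. 0: 0.95, 1: 0.05 -> a single-leaf tree is likely

if counts.min() < 0.3:             # arbitrary threshold for this sketch
    minority = counts.idxmin()
    majority = counts.idxmax()
    n_major = (df['target'] == majority).sum()
    oversampled = df[df['target'] == minority].sample(n=n_major, replace=True, random_state=0)
    balanced = pd.concat([df[df['target'] == majority], oversampled])
    print(balanced['target'].value_counts(normalize=True))
```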

Training images? Considerations for selection

I'm relatively new and am still learning the basics. I've used NVIDIA DIGITS in the past, and am now looking at TensorFlow. While I've been able to fumble my way around creating some models for a few projects I'm working on, I really want to start diving deeper into what I'm doing, how I'm doing it, and ultimately gain a better understanding of why.
One area that I would like to start with is the images that I'm using for training and testing. Can anyone point me to a blog, an article, or a paper, or give me some insight into what I need to consider when selecting images to train a new model on? Up until recently, I've been using datasets that have already been selected and that are available for download. Let's say I'm going to start working on a project that involves object detection of ships from a variety of distances and angles.
So my thoughts would be:
1) I need a large quantity of images.
2) The images need to contain ships of the different types I would like to detect. (Let's just say one class, ships; I don't care what type of ship.)
3) I also need images that cover a wide variety of distances and perspectives for the different types of ships.
Ultimately, my thoughts are that the images need to reflect the distance, perspective, and types of ships I would ideally want to identify from the video. Seems simple enough.
However, there are a number of questions:
Do the images need to be the same/similar resolution as the camera I'll be using, for best results?
Do the images all need to be the same resolution?
Can I use a single image and just digitally zoom out on the image to give the illusion of different distances?
I'm sure there are a number of other questions that I'm not asking, or should be asking. Are there any guidelines available for creating a solid collection of images for training and validation?
I recommend thinking the problem through end to end: for example, would you need to classify ship models as a next step? I also recommend going through well-known public datasets and actually working with them: the structure, how data and labels are stored, how preprocessing is handled, etc.
More importantly, what are you trying to achieve? Talking to experts on the topic helps greatly when preparing your own dataset.
Use open-source images if you can, e.g. Flickr, Google, ImageNet.
No, you don't need them to be the same resolution.
It is not ideal to zoom in/out of images to use them as different categories. Image preprocessing and data augmentation already do this to create more distant representations of the same class (see the augmentation sketch below); this is why I would recommend a hands-on approach with an existing dataset first.
Yes, what you need is many different representations of each class, and a roughly balanced dataset across classes. If you define your data structure well in the beginning, it will save you a ton of time, as you won't have to make changes often.
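To make the augmentation point above concrete, here is a minimal Keras sketch; the directory layout (one sub-folder per class under ships_train/) and the parameter values are assumptions for illustration only:

```python
# Minimal sketch: generate zoomed/flipped variants of training images on the fly,
# instead of manually zooming images into separate categories.
import tensorflow as tf

datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255,
    zoom_range=0.3,            # simulates different viewing distances
    horizontal_flip=True,
    rotation_range=10,
    validation_split=0.2,
)

train_gen = datagen.flow_from_directory(
    'ships_train/',            # placeholder: one sub-folder per class, e.g. ship/ and no_ship/
    target_size=(224, 224),    # images are resized here, so mixed source resolutions are fine
    batch_size=32,
    class_mode='binary',
    subset='training',
)
```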

Full Page Text Recognition Dataset Creation

I have been reading OCR papers such as this one https://arxiv.org/pdf/1704.08628.pdf, and I am having trouble finding out how these datasets are actually generated.
In the linked paper, they use a regressor to predict the start location (a point) and height of a line of text. Then, based on that starting point and height, a second network performs OCR and end of line detection. I realize this is a very simplified explanation, but it follows that their dataset consists (at least in part) of full page text 'images' annotated with where each line begins, and then a transcription of the text on a given line. Alternatively, they could have just used the lower left point of bounding boxes as the start point and the height of the box as the word height (avoiding the need to re-annotate if the data was previously prepared using bounding boxes).
So how is a dataset like this actually created? Looking at other datasets, it seems like there is some software that can create XML files containing the ground truths relevant to each image. Can someone point me in the right direction? I've been googling around and finding lots of tools for annotating text with sentiment etc. and other tools for annotating images for segmentation (for something like a YOLO network), but I'm coming up empty on creating something like the Maurdor dataset used in the linked paper.
Thank you
So after submitting this, the related threads window showed me many threads that my googling did not turn up. This software, http://www.prima.cse.salford.ac.uk/tools, seems to be what I was looking for, but I would still love to hear other ideas.
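On the bounding-box idea from the question: once an annotation tool (the PRIMA tools above, or any other) gives you per-line bounding boxes and transcriptions, deriving the line start point and height the paper describes is only a few lines. A hypothetical sketch, assuming boxes come as (x, y, width, height) with the origin at the top left:

```python
# Hypothetical sketch: turn per-line bounding boxes into (start point, line height)
# ground truth. Boxes are assumed to be (x, y, width, height), origin at top-left.

def box_to_line_annotation(box):
    x, y, w, h = box
    start_point = (x, y + h)    # lower-left corner of the box
    line_height = h
    return start_point, line_height

if __name__ == '__main__':
    # One page: (box, transcription) pairs with made-up values for the example.
    page = [((120, 80, 900, 42), 'First line of text'),
            ((120, 130, 860, 40), 'Second line of text')]
    for box, text in page:
        start, height = box_to_line_annotation(box)
        print(start, height, text)
```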

Is there multi-dimensional spatial pooler for NuPIC?

I'm currently working on some image/video recognition systems. I'd like to build them on top of NuPIC, but I cannot find a multi-dimensional spatial pooler, which is very important in the vision domain. Should I implement a multi-dimensional SP myself?
With this recent change to the SP, it now supports any number of dimensions. Just set inputDimensions and columnDimensions to whatever dimensions you want (make sure they have the same number of dimensions, though), and the SP will respect the topology of the input and column space.
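For example, a 2-D setup might look roughly like this; the exact import path depends on your NuPIC version, and the parameter values are only illustrative:

```python
# Rough sketch: a 2-D spatial pooler over a 32x32 binary input.
# Import path and parameters may differ between NuPIC versions.
import numpy as np
from nupic.algorithms.spatial_pooler import SpatialPooler

sp = SpatialPooler(
    inputDimensions=(32, 32),        # 2-D input topology
    columnDimensions=(32, 32),       # same number of dimensions as the input
    potentialRadius=4,               # local receptive fields
    globalInhibition=False,          # keep inhibition local to preserve topology
    numActiveColumnsPerInhArea=3,
)

input_bits = np.random.randint(2, size=32 * 32).astype('uint32')
active = np.zeros(32 * 32, dtype='uint32')
sp.compute(input_bits, True, active)  # learn=True
print(active.reshape(32, 32).nonzero())
```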