3D CAD Graphics Framework on OS X - Suggestions? - objective-c

I am looking to implement a 3D model viewer in my application. The application uses a series of interlinked plug-in objects, with user attributes, to contribute to the 3D form. (Basically a parametric design tool).
The plug-ins must communicate via a common, simple protocol, as they may also be user-contributed. I am therefore looking for a suitable high-level library/framework to generate the 3D model, preferably Cocoa-based, that could either be exposed directly to the plug-ins or reached via a traceable translation layer in the main program (allowing plug-ins to modify their contribution to the model).
It should be able to generate 3D forms from standard planes, surfaces and Boolean operations.
Does anyone have experience with any such frameworks, such as perhaps Coin3D, and could you advise on their suitability?
The icing on the cake would be reliable calculation of volumes and areas, for scientific calculations (Buoyancy, stability etc).
I have not decided on the distribution model, and would welcome suggestions of any licence format, but if the application is paid for it would be sub £30, and I do not have the capital to invest in expensive licensing seats.

Although it is not a macOS framework, why not get started with OpenSCAD?

Related

Google Poly alternative for 3D models API

Inevitably (it hasn't been updated since 2018), though sadly, Google Poly is closing down. :-(
Is there another 3D mesh API available? Ideally one that also works "out of the box" with Unity? Ideally with the option to filter by low-poly models and such...
If you want free or paid 3D models for Unity, I would suggest these websites:
1. Cults 3D
Cults offers its users the perfect range of 3D models – from maker-inspired 3D files all the way to professional high-quality designs. Cults checks each 3D design for printability and organizes them into different groups such as fashion, art, jewelry, home, architecture, or gadgets. The mix of a modern visual interface, a well-arranged database, and a focus on smart, useful & beautiful designs makes browsing through their website a lot of fun. While many models come for free, others come at very affordable prices.
2. Pinshape
Pinshape offers its visitors the opportunity to browse through a great selection of more than 13,000 (free and payable) STL files. Finding great 3D printable models on the website is child’s play: both the visual representation and logical organization of the website are top-notch.
3. 3DShook
The 3DShook website is somewhat similar to Pinshape and Cults but the designs tend to be more focused on ‘fun’ 3D prints for hobbyists. Some models are free but most require a fee. However, 3DShook does offer designs at a very competitive price.
4. Thingiverse
Thingiverse is probably one of the biggest and most popular databases. It has a very active maker community behind it and offers free-to-use STL files only. You don’t even need to open an account in order to download a 3D model from their site. Sometimes the database can seem slightly less organized than the cleaner and simpler design of sites like Pinshape and Cults.
5. GrabCAD
GrabCAD is different from the databases we have looked at so far. Firstly, GrabCAD provides you with technical, engineering, and scale models only. Secondly, it lets you filter its database based on the 3D modeling software that the designs were created in. It's the place to be for anyone looking for more than 27,000 technical 3D files. However, take into account that this website is not intended for 3D printing.
6. 3D Warehouse
The 3D Warehouse simply screams ‘geometrical’. Whether you are looking for architecture, product design or scale models, 3D Warehouse offers anything that was created with the popular 3D modeling software SketchUp. Luckily they also let you filter their database for 3D printable models by selecting ‘Only Show Printable Models’ in their advanced search function. All other models can be made printable thanks to a connection with the 3DPrintCloud.
7. CGTrader
CGTrader offers a dedicated database for 3D printable objects. So far there are more than 13,000 models to choose from. We noticed that there are many printable jewelry designs in particular on this website. While many models are downloadable for free, others come at affordable prices.
8. TurboSquid
TurboSquid is the place to be for downloading the most stunning 3D designs. It doesn’t get any more high-end and professional than this. The problems: the designs are great visually but are not optimized for 3D printing. There is also no filtering option for finding 3D printable models. Furthermore, all models on TurboSquid are premium (payable) files. Quality comes at a price.
9. 3DExport
3DExport is somewhat similar to TurboSquid: This is also a database that focuses on the visual aspects and offers amazing premium 3D models. In addition to this, 3DExport offers its users a filter for finding 3D printable models only.
10. Yeggi
Last but not least we want to mention Yeggi, a search engine for 3D printable models. Yeggi scans all the databases mentioned above, and many more, for 3D printable files. So if you want to search the ‘Google’ of 3D models, this might be the right website for you.
11. Remix 3D
Remix 3D is more than a 3D model database: it is Microsoft's community of modelers who want to share their creations. The models in the library are organized by category, and they can be used and of course "remixed" in Microsoft Paint 3D and Microsoft's VR/AR app, Mixed Reality.
source

How can I create a 3D modeling app? What resources will I require?

I want to create an application which converts 2D images/video into a 3D model. While researching, I found similar applications like Trnio, Scann3D, Qlone, and a few others (though some of them produce poor-quality 3D models). I also found out about a technology from Microsoft Research called MobileFusion, which showed the same vision I was hoping for with my application, but these apps were nothing like that.
Creating a 3D modelling app is a complex task, and achieving it to a high standard requires a lot of study. To point you in the right direction: you most likely want to perform something called Structure from Motion (SfM) or Simultaneous Localization and Mapping (SLAM).
If you want to program this yourself, OpenCV is a good place to start if you know C++ or Python. A typical pipeline involves feature extraction and matching, camera pose estimation, and triangulation, followed by optimisation using bundle adjustment. All pipelines for SfM and SLAM follow these general steps (with exceptions, of course). All of these steps are possible in OpenCV, although Google's Ceres Solver is an excellent open-source bundle adjuster. SfM generally goes on to dense matching, which yields very dense point clouds that are good for creating meshes. A free, open-source pipeline for this is OpenSfM. Another good source of tools is OpenMVG, which has everything you need to build a full pipeline.
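To make the triangulation step concrete, here is a minimal sketch of linear (DLT) triangulation in plain NumPy. The camera matrices and the point are made up for the example; a real pipeline would use calibrated intrinsics and OpenCV's equivalents (e.g. cv2.triangulatePoints).

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) image coordinates of the matched feature in each view
    Returns the 3D point in world coordinates.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the right singular vector associated
    # with the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy setup: identity intrinsics, second camera translated along x.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
```

With noise-free matches the point is recovered exactly; with real feature matches this linear estimate is what bundle adjustment then refines.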
SLAM is similar to SfM but has more of a focus on real-time application and less on absolute accuracy. Applications for this are more centred around robotics, where a robot wants to know where it is relative to its environment but is not so concerned with absolute accuracy. The top SLAM algorithms are ORB-SLAM and LSD-SLAM. Both are open source and free for you to implement in your own software.
So really it depends on what you want... SfM for high accuracy, SLAM for real time. If you want a good 3D model, I would recommend using existing algorithms, as they are very good.
The best commercial software, in my opinion, is Agisoft PhotoScan. If you can make anything half as good as this, I'd be very impressed. To answer your question about what resources you will require: in my opinion, Python/C++ skills, the ability to google well, and spare time to read up properly on photogrammetry and SfM.

Robot odometry in labview

I am currently working on a (school-)project involving a robot having to navigate a corn field.
We need to build the complete software in NI LabVIEW.
Because of the tasks the robot has to be able to perform, the robot has to know its position.
As sensors we have a 6-DOF IMU, some unreliable wheel encoders, and a 2D laser scanner (SICK TIM351).
So far I have been unable to find any suitable algorithms or tutorials, and thus I am really stuck on this problem.
I am wondering if anyone has ever attempted to make SLAM work in LabVIEW, and if so, are there any examples or explanations of how to do this?
Or is there perhaps a toolkit for LabVIEW that contains this function/algorithm?
Kind regards,
Jesse Bax
3rd year mechatronic student
As Slavo mentioned, there's the LabVIEW Robotics module, which contains algorithms like A* for pathfinding. But there is not much in it that can help you solve the SLAM problem, as far as I am aware. The SLAM problem consists of the following parts: landmark extraction, data association, state estimation, and updating of state.
For landmark extraction, you have to pick one or more features that you want the robot to recognize. This can be, for example, a corner or a line (a wall in 3D). You can use clustering, split-and-merge, or the RANSAC algorithm. I believe your laser scanner extracts and stores the points in a list sorted by angle, which makes the split-and-merge algorithm very feasible. RANSAC is the most accurate of them, but it also has higher complexity. I recommend starting with some ideal data points for testing the line extraction. For example, you can put your laser scanner in a small room with straight walls, perform one scan, and save it to an array or a file. Make sure the contour is a bit more complex than just four walls, and remove noise either before or after the measurement.
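To give a feel for the algorithm, here is a rough sketch of the "split" half of split-and-merge in Python/NumPy (your real implementation would be a LabVIEW VI; the tolerance value and the toy two-wall scan are made up for illustration):

```python
import numpy as np

def split(points, tol=0.05):
    """Recursively split an angle-ordered array of 2D scan points into
    line segments: draw a line through the endpoints, find the point
    farthest from it, and split there if that distance exceeds `tol`."""
    p0, p1 = points[0], points[-1]
    d = p1 - p0
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # unit normal to the line
    dist = np.abs((points - p0) @ n)                  # point-to-line distances
    i = int(np.argmax(dist))
    if dist[i] > tol and 0 < i < len(points) - 1:
        # Farthest point is off the line: split there and recurse.
        return split(points[:i + 1], tol) + split(points[i:], tol)
    return [(p0, p1)]                                 # one segment: its endpoints

# Toy scan: an L-shaped corner (two straight walls) sampled in order.
wall1 = np.column_stack([np.linspace(0, 2, 20), np.zeros(20)])
wall2 = np.column_stack([np.full(20, 2.0), np.linspace(0, 1.5, 20)])
segments = split(np.vstack([wall1, wall2]))
```

The merge pass (joining adjacent collinear segments) and noise handling are left out here, but this is the core recursion.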
I haven't read up on good methods for data association, but you could, for example, consider a landmark new if it is more than a certain distance away from every existing landmark, and update the matching old landmark otherwise.
State estimation and updating of state can be achieved with a complementary filter or the Extended Kalman Filter (EKF). The EKF is the de facto standard for nonlinear state estimation [1] and tends to work very well in practice. The theory behind the EKF is quite tough, but the implementation should be a tad easier. I would recommend using the MathScript module if you are going to program the EKF. The point of these two filters is to estimate the position of the robot from the wheel encoders and the landmarks extracted from the laser scanner.
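For a feel of what one EKF cycle looks like, here is a hedged Python/NumPy sketch of a single predict/update step for a unicycle robot with a range-bearing measurement to one known landmark. The motion model, noise matrices, and numbers are illustrative only; in LabVIEW this would map fairly directly to MathScript code.

```python
import numpy as np

def ekf_step(x, P, u, z, landmark, Q, R, dt=0.1):
    """One EKF predict/update cycle for a unicycle robot.

    x = [px, py, theta] pose estimate, P its 3x3 covariance.
    u = [v, w] odometry velocity and turn rate (drives the prediction).
    z = [range, bearing] to a known landmark (drives the update).
    """
    px, py, th = x
    v, w = u
    # --- Predict: motion model and its Jacobian F = df/dx ---
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0, 1]])
    P_pred = F @ P @ F.T + Q
    # --- Update: range-bearing measurement model and Jacobian H ---
    dx, dy = landmark - x_pred[:2]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x_pred[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0],
                  [ dy / q,          -dx / q,         -1]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi       # wrap bearing innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)               # Kalman gain
    return x_pred + K @ y, (np.eye(3) - K @ H) @ P_pred

# Demo: true pose (1, 1, 0); initial estimate is off by (-0.2, +0.3).
truth = np.array([1.0, 1.0, 0.0])
landmark = np.array([4.0, 5.0])
x0, P0 = np.array([0.8, 1.3, 0.0]), np.eye(3) * 0.1
dx, dy = landmark - truth[:2]
z = np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - truth[2]])
x_new, P_new = ekf_step(x0, P0, u=np.array([0.0, 0.0]), z=z,
                        landmark=landmark, Q=np.eye(3) * 1e-4, R=np.eye(2) * 0.01)
```

After the update the estimate moves toward the true pose and the covariance shrinks, which is exactly what you should see in your own tests with ideal data.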
As the SLAM problem is a big task, I would recommend programming it in multiple smaller SubVIs, so that you can properly test the parts without too much added complexity.
There's also a lot of good papers on SLAM.
http://www.cs.berkeley.edu/~pabbeel/cs287-fa09/readings/Durrant-Whyte_Bailey_SLAM-tutorial-I.pdf
http://ocw.mit.edu/courses/aeronautics-and-astronautics/16-412j-cognitive-robotics-spring-2005/projects/1aslam_blas_repo.pdf
The book "Probabilistic Robotics".
https://wiki.csem.flinders.edu.au/pub/CSEMThesisProjects/ProjectSmit0949/Thesis.pdf
LabVIEW provides the LabVIEW Robotics module, and there are plenty of templates for it. You can first check the Starter Kit 2.0 template, which will provide you with a simple working self-driving robot project. You can build on such a template and develop your own application from a working model rather than from scratch.

Automated Design in CAD, Analysis in FEA, and Optimization

I would like to optimize a design by having an optimizer make changes to a CAD file, which is then analyzed in FEM, and the results fed back into the optimizer to make changes on the design based on the FEM, until the solution converges to an optimum (mass, stiffness, else).
This is what I envision:
create a blueprint of the part in a CAD software (e.g. CATIA).
run an optimizer code (e.g. fmincon) from within a programming language (e.g. Python). The parameters of the optimizer are parameters of the CAD model (angles, lengths, thicknesses, etc.).
the optimizer evaluates a certain design (parameter set). The programming language calls the CAD software and modifies the design accordingly.
the programming language extracts some information (e.g. mass).
then the programming language exports a STEP file and passes it to an FEA solver (e.g. Abaqus), where a predefined analysis is performed.
the programming language reads the results (e.g. max von Mises stress).
the results from CAD and FEM (e.g. mass and stress) are fed to the optimizer, which changes the design accordingly.
repeat until the solution converges.
I know this exists from within a closed architecture (e.g. isight), but I want to use an open architecture where the optimizer is called from within an open programming language (ideally Python).
So finally, here are my questions:
Can it be done, as I described it or otherwise?
References or tutorials, please?
Which software do you recommend for programming, CAD, and FEM?
Yes, it can be done. What you're describing is a small parametric structural sizing multidisciplinary optimization (MDO) environment. Before you even begin coding up the tools or environment, I suggest doing some preliminary work on a few areas:
Carefully formulate the minimization problem (minimize f(x), where x is a vector containing ... variables, subject to ... constraints, etc.)
Survey and identify individual tools of interest
How would each tool work? Input variables? Output variables?
Outline in a Design Structure Matrix (a.k.a. N^2 diagram) how the tools will feed information (variables) to each other
What optimizer is best suited to your problem (MDF?)
Identify suitable convergence tolerance(s)
Once the above steps are taken, I would then start to think about MDO implementation details. Python, while not the fastest language, would be an ideal environment because there are many tools built in Python for solving MDO problems like yours, and development time is low. I suggest going with the following packages:
OpenMDAO (http://openmdao.org/): a modern MDO platform written by NASA Glenn Research Center. The tutorials do a good job of getting you started. Note that each "discipline" in the Sellar problem, the 2nd problem in the tutorial, would include a call to your tool(s) instead of a closed-form equation. As long as you follow OpenMDAO's class framework, it does not care what each discipline is and treats it as a black-box; it doesn't care what goes on in-between an input and an output.
SciPy and NumPy: two scientific and numerical computing packages (SciPy includes optimizers)
I don't know what software you have access to, but here are a few tool-related tips to help you in your tool survey and identification:
Abaqus has a Python API (http://www.maths.cam.ac.uk/computing/software/abaqus_docs/docs/v6.12/pdf_books/SCRIPT_USER.pdf)
If you need to use a program that does not have an API, you can automate its GUI using Python's win32com or the Pywinauto (GUI automation) package
For FEM/FEA, I used both MSC PATRAN and MSC NASTRAN on previous projects since they have command-line interfaces (read: easy to interface with via Python)
HyperSizer also has a Python API
Install Pythonxy (https://code.google.com/p/pythonxy/) and use the Spyder Python IDE (included)
CATIA can be automated using win32com (quick Google search on how to do it: http://code.activestate.com/recipes/347243-automate-catia-v5-with-python-and-pywin32/)
Note: to give you some sort of development time-frame, what you're asking will probably take at least two weeks to develop.
I hope this helps.

Comparative Analysis of Object Recognition or Tracking Methods with Multiple Kinects

First of all, I have looked into the list of all branches of StackExchange, and this seems to be the one best suited for this question.
I am looking for a comparative analysis (can be both theoretical and implementation-oriented) between various popular methods of object recognition and tracking using a Microsoft Kinect 360. These methods do not have to include specialised Kinect API features like hand gesture recognition or skeleton tracking. Could you point me to some technical literature that tackles this subject? I have found quite a few papers that talk about doing object detection and training based on the PCL library. I have also found some papers on using just the RGB image of the Kinect to do classic object tracking. But to put things into perspective, I want to know which method gives good performance and is less challenging to implement when a set of Kinects (possibly with overlapping projection cones) is used to do object recognition and/or tracking.
In my opinion, building a unified point cloud to analyse and label the objects to recognise/classify them using multiple Kinects (assuming I have multiple BUSes) would have too high processing overhead. Would it be a viable alternative to do foreground extraction on the depth images separately and then somehow identify duplicates?
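As a rough sketch of the per-camera alternative (illustrative only; the threshold and the synthetic frame are made up), foreground extraction on a single Kinect depth image can be as simple as comparing against a median background model:

```python
import numpy as np

def depth_foreground(frames, current, delta=50):
    """Per-camera foreground extraction on raw depth images.

    frames : stack of background depth frames (N x H x W, in mm) from one Kinect
    current: the current H x W depth frame
    delta  : minimum depth change (mm) to count as foreground
    Returns a boolean H x W mask of pixels closer than the background model.
    """
    background = np.median(frames, axis=0)   # robust static background model
    valid = current > 0                      # Kinect reports 0 for "no reading"
    return valid & (current < background - delta)

# Toy 8x8 scene: flat wall at 2000 mm, object appears at 1200 mm.
rng = np.random.default_rng(0)
bg_frames = 2000 + rng.normal(0, 5, size=(10, 8, 8))
frame = np.full((8, 8), 2000.0)
frame[2:5, 2:5] = 1200.0                     # the foreground object
mask = depth_foreground(bg_frames, frame)
```

Duplicates across overlapping Kinects could then plausibly be identified by projecting each camera's foreground points into a common world frame (using the extrinsic calibration) and merging detections that fall within a small distance of each other, which avoids building the full unified point cloud.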