Robot odometry in LabVIEW

I am currently working on a school project involving a robot that has to navigate a corn field.
We need to build the complete software in NI LabVIEW.
Because of the tasks the robot has to be able to perform, it has to know its position.
As sensors we have a 6-DOF IMU, some unreliable wheel encoders, and a 2D laser scanner (SICK TIM351).
So far I have been unable to find any suitable algorithms or tutorials, so I am really stuck on this problem.
I am wondering if anyone has ever attempted to make SLAM work in LabVIEW, and if so, whether there are any examples or explanations of how to do it?
Or is there perhaps a toolkit for LabVIEW that contains this function/algorithm?
Kind regards,
Jesse Bax
3rd year mechatronic student

As Slavo mentioned, there's the LabVIEW Robotics module, which contains algorithms like A* for path finding. But as far as I am aware, there's not much in it that can help you solve the SLAM problem. The SLAM problem consists of the following parts: landmark extraction, data association, state estimation, and state update.
For landmark extraction, you have to pick one or more features that you want the robot to recognize. This could, for example, be a corner or a line (a wall in 3D). To extract them, you can use, for example, clustering, split-and-merge, or the RANSAC algorithm. I believe your laser scanner extracts and stores the points in a list sorted by angle, which makes the split-and-merge algorithm very feasible. RANSAC is the most accurate of them, but it also has higher complexity. I recommend starting with some ideal data points for testing the line extraction: for example, put your laser scanner in a small room with straight walls, perform one scan, and save it to an array or a file. Make sure the contour is a bit more complex than just four walls, and remove noise either before or after the measurement.
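LabVIEW block diagrams can't be pasted as text, so here is a rough Python sketch of the split pass of split-and-merge, i.e. the logic you would port to a MathScript node or G code. The threshold and the toy corner scan are made-up values for illustration, and a full implementation would also merge nearly collinear neighbouring segments:

```python
# Rough sketch of split-and-merge line extraction on a laser scan whose
# points are already sorted by angle. Only the recursive "split" pass is
# shown; a real implementation would also merge nearly collinear segments.
import numpy as np

def point_line_distance(pts, a, b):
    """Perpendicular distance from each point to the infinite line a-b."""
    d = b - a
    return np.abs(d[0] * (pts[:, 1] - a[1])
                  - d[1] * (pts[:, 0] - a[0])) / np.linalg.norm(d)

def split(pts, threshold=0.05):
    """Split until every point lies within `threshold` metres of its
    segment. Returns the list of segment endpoints (the landmarks)."""
    dists = point_line_distance(pts, pts[0], pts[-1])
    i = int(np.argmax(dists))
    if dists[i] > threshold and 0 < i < len(pts) - 1:
        left = split(pts[:i + 1], threshold)
        return left + split(pts[i:], threshold)[1:]  # drop duplicated joint
    return [pts[0], pts[-1]]

# Toy test scan: two straight walls meeting in a corner, plus a bit of noise.
t = np.linspace(0.0, 1.0, 20)
scan = np.vstack([np.column_stack([t, np.zeros(20)]),
                  np.column_stack([np.ones(20), t])])
scan += np.random.normal(0.0, 0.005, scan.shape)
endpoints = split(scan)
print(len(endpoints) - 1, "segments found")  # expect 2
```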
I haven't read up on good methods for data association, but a simple rule is to consider a landmark new if it is more than a certain distance away from every existing landmark, and otherwise to use the observation to update the matching old landmark.
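A tiny Python sketch of that rule; the gate distance and the smoothing factor are arbitrary illustrative choices:

```python
# Naive nearest-neighbour data association: match an observation to the
# closest known landmark if it is within `gate` metres, else add it as new.
import numpy as np

def associate(obs, landmarks, gate=0.5):
    """obs: (2,) observed landmark position in the map frame.
    Mutates `landmarks` (a list of (2,) arrays); returns (index, is_new)."""
    if landmarks:
        dists = [np.linalg.norm(obs - lm) for lm in landmarks]
        i = int(np.argmin(dists))
        if dists[i] < gate:
            landmarks[i] = 0.9 * landmarks[i] + 0.1 * obs  # nudge old estimate
            return i, False
    landmarks.append(obs)
    return len(landmarks) - 1, True
```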
State estimation and state update can be achieved with a complementary filter or the Extended Kalman Filter (EKF). The EKF is the de facto standard for nonlinear state estimation [1] and tends to work very well in practice. The theory behind the EKF is quite tough, but the implementation should be a tad easier. I would recommend using the MathScript module if you are going to program the EKF. The point of these two filters is to estimate the position of the robot from the wheel encoders and the landmarks extracted from the laser scanner.
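For reference, here is a minimal Python sketch of the EKF equations you would port to MathScript. To keep it short, it estimates only the robot pose (x, y, theta) from odometry plus range-bearing measurements of landmarks at known positions; full EKF-SLAM additionally stacks the landmark coordinates into the state vector:

```python
# Minimal EKF for a differential-drive robot pose [x, y, theta].
import numpy as np

def predict(x, P, v, w, dt, Q):
    """Prediction from odometry: forward speed v, turn rate w."""
    th = x[2]
    x = x + np.array([v * dt * np.cos(th), v * dt * np.sin(th), w * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])               # motion-model Jacobian
    return x, F @ P @ F.T + Q

def update(x, P, z, lm, R):
    """Correction from a range-bearing measurement z of landmark lm."""
    dx, dy = lm[0] - x[0], lm[1] - x[1]
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                  [dy / q,           -dx / q,          -1.0]])  # measurement Jacobian
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi    # wrap bearing innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

# One cycle: drive forward for 0.1 s, then observe a landmark at (2, 0).
x, P = np.zeros(3), np.eye(3) * 0.01
x, P = predict(x, P, v=1.0, w=0.0, dt=0.1, Q=np.eye(3) * 1e-4)
x, P = update(x, P, z=np.array([1.9, 0.0]), lm=(2.0, 0.0), R=np.eye(2) * 1e-3)
```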
As the SLAM problem is a big task, I would recommend programming it as multiple smaller subVIs, so that you can properly test each part without too much added complexity.
There are also a lot of good papers on SLAM:
http://www.cs.berkeley.edu/~pabbeel/cs287-fa09/readings/Durrant-Whyte_Bailey_SLAM-tutorial-I.pdf
http://ocw.mit.edu/courses/aeronautics-and-astronautics/16-412j-cognitive-robotics-spring-2005/projects/1aslam_blas_repo.pdf
The book "Probabilistic Robotics".
https://wiki.csem.flinders.edu.au/pub/CSEMThesisProjects/ProjectSmit0949/Thesis.pdf

LabVIEW provides the LabVIEW Robotics module, and there are also plenty of templates for it. First, you can check the Starter Kit 2.0 template, which provides a simple, working self-driving robot project. You can base your work on such a template and develop your own application from a working model rather than from scratch.

Related

Looking for a way to capture elevation and location data from a device to create a topographical map or model

I'm in the process of buying a 7.5 acre plot of land in a wooded, hilly area. I would estimate that the elevation varies about 50 feet from the bottom of the creek to the top of the hill. I would like to find a good method for measuring the topography of the land so I can create a 3D model. It would be tremendously useful to be able to try out different land development ideas and to simulate locations for future buildings.
My low-tech version of doing this would be to set up a laser level and go around taking elevation measurements in a 3' or so grid pattern. As I thought about that, I realized that smartphones and similar devices have quite a few sensors built in that might make this a lot easier.
I learned about software that will use a drone to capture data and images to automatically generate a topo map and 3D model. Drone Deploy is one such tool. I do have a DJI Phantom 4, but I don't know if it's feasible to fly such an intricate path among trees to scan the entire property. I wonder if there's another way to use this amazing modern hardware (phone or drone) to make my task easy.
I would appreciate hearing any thoughts and ideas about this!
The thing with DroneDeploy is that you fly above the trees, usually at around 30 meters, in a cross pattern.
Why do you want to fly between the trees? You have to explain that first.

How can I create a 3D modeling app? What resources will I require?

I want to create an application that converts 2D images/video into a 3D model. While researching this, I found similar applications like Trnio, Scann3D, Qlone, and a few others (though some of them produce poor-quality 3D models). I also found a technology from Microsoft Research called MobileFusion, which showed the same vision I had for my application, but none of those apps were like that.
Creating a 3D modelling app is a complex task, and achieving a high standard requires a lot of studying. To point you in the right direction: you most likely want to perform something called Structure-from-Motion (SfM) or Simultaneous Localization and Mapping (SLAM).
If you want to program this yourself, OpenCV is a good place to start if you know C++ or Python. A typical pipeline involves feature extraction and matching, camera pose estimation, and triangulation, followed by refinement with bundle adjustment. All pipelines for SfM and SLAM follow these general steps (with exceptions, of course). All of these steps are possible in OpenCV, although Google's Ceres Solver is an excellent open-source bundle adjuster. SfM generally goes on to dense matching, which is where you get very dense point clouds that are good for creating meshes. A free open-source pipeline for this is OpenSfM. Another good source of tools is OpenMVG, which has everything you need to build a full pipeline.
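To make those pipeline steps concrete, here is a minimal two-view sketch in Python with OpenCV; the image file names and the intrinsic matrix K are placeholders you would replace with your own data and calibration:

```python
# Minimal two-view SfM sketch: features -> matches -> pose -> triangulation.
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

img1 = cv2.imread("img1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical inputs
img2 = cv2.imread("img2.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Feature extraction and matching (ORB + brute-force Hamming).
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. Camera pose estimation from the essential matrix (RANSAC).
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 3. Triangulation: first camera at the origin, second at [R|t].
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T  # homogeneous -> Euclidean

# 4. A full pipeline would now refine poses and points with bundle
# adjustment (e.g. Ceres), then densify; omitted here.
print(pts3d.shape)
```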
SLAM is similar to SfM; however, it focuses more on real-time operation and less on absolute accuracy. Its applications are more centred around robotics, where a robot wants to know where it is relative to its environment but is not so concerned with absolute accuracy. The best-known SLAM systems are ORB-SLAM and LSD-SLAM. Both are open source and free for you to build into your own software.
So really it depends on what you want: SfM for high accuracy, SLAM for real time. If you want a good 3D model, I would recommend using existing algorithms, as they are very good.
The best commercial software, in my opinion, is Agisoft PhotoScan; if you can make anything half as good as that, I'd be very impressed. To answer your question about what resources you will require: in my opinion, Python/C++ skills, the ability to google well, and spare time to read up properly on photogrammetry and SfM.

Insert skeleton in 3D model programmatically

Background
I'm working on a project where a user gets scanned by a Kinect (v2). The result will be a generated 3D model which is suitable for use in games.
The scanning aspect is going quite well, and I've generated some good user models.
Example: (screenshot of a generated user model omitted.)
Note: This is just an early test model. It still needs to be cleaned up, and the stance needs to change to properly read skeletal data.
Problem
The problem I'm currently facing is that I'm unsure how to place skeletal data inside the generated 3D model. I can't seem to find a program that will let me insert the skeleton into the 3D model programmatically. I'd like to do this either via a program that I can control programmatically, or by adjusting the 3D model file in such a way that the skeletal data gets included within the file.
What have I tried
I've been looking around for similar questions on Google and StackOverflow, but they usually refer to either motion capture or skeletal animation. I know Maya has the option to insert skeletons in 3D models, but as far as I could find that is always done by hand. Maybe there is a more technical term for the problem I'm trying to solve, but I don't know it.
I do have a train of thought on how to achieve the skeleton insertion. I imagine it going like this:
1. Scan the user and generate a 3D model with the Kinect.
2. Clean the user model, getting rid of any deformations or unnecessary information, and close the holes left by the clean-up.
3. Scan the user's skeletal data with the Kinect.
4. Extract the skeleton data.
5. Get the joint locations and store them as xyz coordinates in 3D space; store the bone lengths and directions.
6. Read the 3D skeleton data into a program that can create skeletons.
7. Save the new model with the inserted skeleton.
Question
Can anyone recommend (I know, this is perhaps "opinion based") a program to read the skeletal data and insert it into a 3D model? Is it possible to utilize Maya for this purpose?
Thanks in advance.
Note: I opted to post the question here and not on Graphics Design Stack Exchange (or other Stack Exchange sites) because I feel it's more coding related, and perhaps more useful for people who will search here in the future. Apologies if it's posted on the wrong site.
A tricky part of your question is what you mean by "inserting the skeleton". Typically bone data is very separate from your geometry, and stored in different places in your scene graph (with the bone data being hierarchical in nature).
There are file formats you can export to where you might establish some association between your geometry and skeleton, but that's very format-specific as to how you associate the two together (ex: FBX vs. Collada).
Probably the closest thing to "inserting" or, more appropriately, "attaching" a skeleton to a mesh is skinning. There you compute weight assignments, basically determining how much each bone influences a given vertex in your mesh.
This is a tough part to get right (both programmatically and artistically), and depending on your quality needs, is often a semi-automatic solution at best for the highest quality needs (commercial games, films, etc.) with artists laboring over tweaking the resulting weight assignments and/or skeleton.
There are algorithms for computing these weight assignments that range from simple heuristics, like assigning weights based on the nearest distance to a bone (very crude, and they often fall apart near tricky areas like the pelvis or shoulder), to ones that actually treat the mesh as a solid volume (using voxel or tetrahedral representations). Example: http://blog.wolfire.com/2009/11/volumetric-heat-diffusion-skinning/
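To illustrate, here is a toy Python version of that crude nearest-distance heuristic (not any particular package's algorithm; the falloff exponent and the toy data are arbitrary choices):

```python
# Crude distance-based skinning weights: each vertex gets a weight per bone
# from its distance to the bone segment, normalized so each row sums to 1.
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to the segment a-b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def distance_skin_weights(vertices, bones, falloff=2.0):
    """vertices: (N,3) array; bones: list of (head, tail) joint positions.
    Returns an (N, num_bones) weight matrix."""
    weights = np.zeros((len(vertices), len(bones)))
    for i, v in enumerate(vertices):
        for j, (head, tail) in enumerate(bones):
            d = point_segment_distance(v, head, tail)
            weights[i, j] = 1.0 / (d + 1e-6) ** falloff  # inverse-distance falloff
    return weights / weights.sum(axis=1, keepdims=True)

# Toy usage: two bones of an "arm", three vertices near the elbow.
bones = [(np.array([0.0, 0, 0]), np.array([1.0, 0, 0])),
         (np.array([1.0, 0, 0]), np.array([2.0, 0, 0]))]
verts = np.array([[0.5, 0.1, 0], [1.5, 0.1, 0], [1.0, 0.3, 0]])
print(distance_skin_weights(verts, bones))
```

This is exactly the kind of heuristic that breaks down near the pelvis or shoulder, which is why the volumetric methods linked above exist.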
However, you might be able to get decent results using an algorithm like delta mush which allows you to get a bit sloppy with weight assignments but still get reasonably smooth deformations.
Now if you want to do this externally, pretty much any 3D animation software will do, including free ones like Blender. However, skinning and character animation in general tend to take quite a bit of artistic skill and a lot of patience, so it's worth noting that it's not as easy as it might seem to make characters leap, dance, crouch, and run and still look good, even when you have a skeleton in advance. The weight association from skeleton to geometry is the toughest part; it's often the result of many hours of artists laboring over the deformations to get them to look right in a wide range of poses.

API to break voice into phonemes / synthesize new speech given speech samples?

You know those movies where the tech geeks record someone's voice, and their software breaks it into phonemes? Which they can then use to type in any phrase, and make it seem as if the target is saying it?
Does that software exist in an API version? I don't even know what to Google.
There is no such software. Breaking arbitrary speech into its constituent phonemes is only a partially solved problem: speech-to-text software is still imperfect, as is text-to-speech.
The idea is to reproduce the timbre of the target's voice. Even if you were able to segment the audio perfectly, reordering the phonemes would produce audio with unnatural cadence and intonation, not to mention splicing artifacts. At that point you're getting into smoothing, time-scaling, and pitch correction, all of which are possible and well-understood in theory, but operate poorly on real-world data, especially when the audio sample in question is as short as a single phoneme, and further when the timbre needs to be preserved.
These problems are compounded on the phonetic side by allophonic variation in sounds based on accent and surrounding phonemes; in order to faithfully produce even a low-quality approximation of the audio, you'd need a detailed understanding of the target's language, accent, and speech patterns.
Furthermore, your ultimate problem is one of social engineering, and people are not easy to fool when it comes to the voices of people they know. Even with a large corpus of input data, at best you could get a short low-quality sample, hardly enough for a conversation.
So while it's certainly possible, it's difficult; even if it existed, it wouldn't always be good enough.
SRI International (the company that created Siri for iOS) has an SDK called EduSpeak, which will take audio input and break it down into individual phonemes. I know this because I sat through a demo of the product about a week ago. During the demo, the presenter showed us an application that was created using the SDK. The application gave a few lines of text for the presenter to read. After reading the text, the application displayed a bar chart where each bar represented a phoneme from his speech. The height of each bar represented a score of how well each phoneme was pronounced (the presenter was not a native English speaker, so he received lower scores on certain phonemes compared to others). The presenter could also click on each individual bar to have only that individual phoneme played back using the original audio.
So yes, software exists that divides audio up by phoneme, and it does a very good job of it. Now, whether or not those phonemes can be re-assembled into speech is an open question. If we end up getting a trial version of the SDK, I'll try it out and let you know.
If your aim is to mimic someone else's voice, then another approach is to convert your own voice (instead of assembling phonemes). It is (surprisingly) called voice conversion, e.g. http://www.busim.ee.boun.edu.tr/~speech/projects/Voice_Conversion.htm
The technologies are called "voice synthesis" and "voice recognition".
The Java API for this can be found here: Java voice JSAPI.
Apple has an API for this: Apple speech.
Microsoft has several... one is discussed here: Vista speech.
Lyrebird is a start-up that is working on this very problem. Given samples of a person's voice and some written text, it can synthesize a spoken version of that written text in the voice of the person in the samples.
You can get interesting voice warping effects with a formant-aware pitch shift. Adobe Audition has a pretty good implementation. Antares produces some interesting vocal effects VST plugins.
These techniques use some form of linear predictive coding (LPC) to treat the voice as a source-filter model. LPC works on speech signals by estimating the resonance of the vocal tract (the formants), reversing its effect with an inverse filter, and then coding the resulting residual signal. The residual is ideally an impulse train representing the glottal excitation. This allows pitch and formants to be scaled independently, which leads to a much better gender-conversion result than simple pitch shifting.
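As a concrete illustration, here is a short Python sketch of that source-filter decomposition using librosa's LPC routine; the file name, frame position, and model order are placeholders:

```python
# LPC source-filter decomposition: estimate the vocal-tract (formant) filter
# on one short voiced frame, inverse-filter to get the glottal residual,
# then resynthesize. "voice.wav" is a hypothetical mono recording.
import numpy as np
import librosa
import scipy.signal

y, sr = librosa.load("voice.wav", sr=16000, mono=True)
frame = y[8000:8000 + 512] * np.hanning(512)       # one short voiced frame

a = librosa.lpc(frame, order=16)                   # A(z): formant filter coeffs
residual = scipy.signal.lfilter(a, [1.0], frame)   # inverse filter -> excitation
resynth = scipy.signal.lfilter([1.0], a, residual) # 1/A(z) reconstructs the frame

# Pitch and formants can now be manipulated independently, e.g. by warping
# A(z) while time/pitch-scaling the residual.
print(np.max(np.abs(frame - resynth)))             # ~0: lossless round trip
```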
I don't know about a commercially available solution, but the concept isn't entirely out of the range of possibility. For example, the University of Delaware has fairly decent software for doing just that:
http://www.modeltalker.com

How to make a 2D Soft-body physics engine?

The definition of a rigid body in Box2D is: "A chunk of matter that is so strong that the distance between any two bits of matter on the chunk is completely constant."
And this is exactly what I don't want, as I would like to make 2D (and maybe eventually 3D) elastic, deformable, breakable, and even sticky bodies.
What I'm hoping to get out of this community are resources that teach me the math behind how objects bend, break and interact. I don't care about the molecular or chemical properties of these objects, and often this is all I find when I try to search for how to calculate what a piece of wood, metal, rubber, goo, liquid, organic material, etc. might look like after a force is applied to it.
Also, I'm a very visual person, so diagrams and such are EXTREMELY HELPFUL for me.
================================================================================
Ignore these questions, they're old, and I'm only keeping them here for contextual purposes
1. Are there any simple 2D soft-body physics engines out there like this, preferably free or open source?
2. If not, would it be possible to make my own without spending years on it?
3. Could I use existing engines like Bullet and Box2D as a start and simply transform their code, or would this just lead to more problems later, considering my one year of programming experience and Bullet being 3D?
4. Finally, if I were to transform another library, would it be best to change Box2D's already-2D code, Bullet's already-soft-body code, or to mix the source code of both?
Thanks!
(1) Both Bullet and PhysX support deformable objects in some capacity. Bullet is open source and PhysX is free to use. They both have ports for Windows, Mac, Linux, and all the major consoles.
(2) You could hack something together if you really know what you are doing, and it might even work. However, there will probably be bugs unless you have a damn good understanding of how Box2D's sequential-impulse constraint solver works and what measures are necessary to keep your system stable. That said, there are many ways to get deformable objects working with minimal fuss within a game-like environment. The first option is to take a second- (or higher-) order approximation of the deformation. This lets you deal with deformations in much the same way as you deal with rigid motions, only now you have a few extra degrees of freedom. See for example the following paper:
http://www.matthiasmueller.info/publications/MeshlessDeformations_SIG05.pdf
A second method is pressure soft bodies, which model the body as a set of particles with distance constraints and pressure forces. This is what both PhysX and Bullet do, and it is a pretty standard technique by now (for example, Gish used it); see the paper below and the toy sketch after it:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.4.2828&rep=rep1&type=pdf
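Here is a toy Python sketch of that particle + springs + pressure model. All the constants are made up, and real engines use proper constraint solvers rather than explicit springs, but it shows the idea:

```python
# Toy 2D pressure soft body: a ring of particles joined by distance springs,
# with an internal pressure force that pushes edges outward when the
# enclosed area shrinks. Semi-implicit Euler integration.
import numpy as np

N, R, DT = 16, 1.0, 0.005
STIFF, PRESSURE, GRAVITY = 400.0, 60.0, np.array([0.0, -9.8])

ang = np.linspace(0, 2 * np.pi, N, endpoint=False)
pos = np.stack([np.cos(ang), np.sin(ang)], axis=1) * R        # CCW ring
vel = np.zeros_like(pos)
rest = np.linalg.norm(pos - np.roll(pos, -1, axis=0), axis=1) # spring rest lengths

def area(p):  # shoelace formula
    return 0.5 * abs(np.sum(p[:, 0] * np.roll(p[:, 1], -1)
                            - np.roll(p[:, 0], -1) * p[:, 1]))

rest_area = area(pos)

def step(pos, vel):
    force = np.tile(GRAVITY, (N, 1))
    # Distance springs between neighbouring particles on the ring.
    for i in range(N):
        j = (i + 1) % N
        d = pos[j] - pos[i]
        L = np.linalg.norm(d)
        f = STIFF * (L - rest[i]) * d / L
        force[i] += f
        force[j] -= f
    # Pressure proportional to the area deficit, applied along edge normals.
    p = PRESSURE * (rest_area / area(pos) - 1.0)
    for i in range(N):
        j = (i + 1) % N
        edge = pos[j] - pos[i]
        normal = np.array([edge[1], -edge[0]])  # outward for CCW winding
        force[i] += 0.5 * p * normal
        force[j] += 0.5 * p * normal
    vel = (vel + force * DT) * 0.999            # integrate + light damping
    pos = pos + vel * DT
    pos[:, 1] = np.maximum(pos[:, 1], -2.0)     # crude floor at y = -2
    return pos, vel

for _ in range(1000):
    pos, vel = step(pos, vel)
print(pos.mean(axis=0))  # centre of the blob after 5 simulated seconds
```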
If you google around, you can find lots of tutorials on implementing it, but I can't vouch for their quality. Finally, there has been a more recent push toward doing deformable objects the 'right' way, using realistic elastic models and finite-element-type approaches. This is still an area of active research, so it is not for the faint of heart. For example, you could look at any number of the papers in this year's SIGGRAPH proceedings:
http://kesen.realtimerendering.com/sig2011.html
(3) Probably not, though there are certain 2D-style games that can work with a 3D physics engine (for example, top-down games) for special effects.
(4) Based on what I just said, you should probably know the answer by now. If you are the adventurous sort, have some time to kill, and have the will to learn, then I say go for it! Of course it will be hard at first, but like anything it gets easier over time. Plus, learning new stuff is lots of fun!
On the other hand, if you just want results now, then don't do it. It will take a lot of time, and you will probably fail (a lot). If you just want to make games, then stick to the existing libraries and build on whatever abstractions they provide.
Quick and partial answer:
Rigid bodies are easy to model because of their properties: you can use the standard tools of mechanics (such as the French "torseur", whose English equivalent is screw theory) to model the forces applied at any point of your element.
In comparison, non-rigid elements range from almost solid (think very hard rubber: it can deform, but it is almost solid) to almost liquid (think very soft rubber, or latex). The dynamics of such objects are much more complex and depend on the nature of the object.
Take the example of a spring: it is easy to model on its own (F = kx), but creating a generic tool able to model that whole family of cases is a nightmare (especially once you consider the corner cases: extension is not infinite, compression stops at some lower bound, the material is nonlinear...).
As far as I know, when dealing with "elastic" materials, people build their own models for their own purposes (a generic one does not exist).
Now the answers:
1. Probably not, not that I know of at least.
2. Not easily; see above for why.
3. Unless you have a strong background in elastic materials, I fear it's going to be painful.
Hope this helped.
Some specific cases, such as deformable balls, can be simulated pretty well using spring-jointed bodies.
Here is an implementation example with cocos2d: http://2sa-studio.blogspot.com/2014/05/soft-bodies-with-cocos2d-v3.html
Depending on the complexity of the deformable objects that you need, you might be able to emulate them using Box2D, constraining rigid bodies with joints or springs. I did this in the past using a Box2D clone for XNA (Farseer) and it worked nicely. Hope this helps.
The physics of your question breaks down into two different topics:
Inelastic Collisions: The math here is easy, and you could write a pretty decent library yourself without too much work for 2D points/balls (see the sketch at the end of this answer). And with more work, you could learn the physics for extended bodies.
Materials Bending and Breaking: This will be hard. In general, you will have to model many of the topics in Mechanical Engineering:
Continuum Mechanics
Structural Analysis
Failure Analysis
Stress Analysis
Strain Analysis
I am not being glib. Modeling the bending and breaking of materials is, in general, a very detailed and varied topic. It will take a long time, and the only way to succeed will be to understand the science well enough to make clever shortcuts that limit the scope of what you need to model in your game.
However, the other half of your problem (modeling Inelastic Collisions) is a much more achievable goal.
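For instance, the core of 2D ball-ball collision response fits in a few lines. This sketch (my own toy code) uses a coefficient of restitution e, where e = 1 is perfectly elastic and e = 0 perfectly inelastic:

```python
# Impulse-based collision response between two balls in 2D.
import numpy as np

def collide(p1, v1, m1, p2, v2, m2, e=0.3):
    """Resolve a collision between two balls; returns updated velocities."""
    n = (p2 - p1) / np.linalg.norm(p2 - p1)       # collision normal
    v_rel = np.dot(v1 - v2, n)                    # closing speed along n
    if v_rel <= 0:                                # already separating
        return v1, v2
    # Impulse magnitude from conservation of momentum + restitution law.
    j = (1 + e) * v_rel / (1 / m1 + 1 / m2)
    return v1 - (j / m1) * n, v2 + (j / m2) * n

# Head-on, equal masses, perfectly inelastic: both balls end at 0.5 m/s.
v1, v2 = collide(np.array([0.0, 0]), np.array([1.0, 0]), 1.0,
                 np.array([2.0, 0]), np.array([0.0, 0]), 1.0, e=0.0)
print(v1, v2)
```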
Good luck!