I need to know the difference between translational mass and rotational mass - physics

Reading the results of a numerical modal analysis in ANSYS Workbench, I found two types of masses: translational mass and rotational mass. Does anybody know the physical difference between them?

The global mass matrix can be partitioned into a translational and a rotational mass matrix. The translational part contains the ordinary masses associated with the translational degrees of freedom, while the rotational mass matrix contains the mass moments of inertia.
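As an illustration (my own, not taken from the ANSYS documentation): for a single rigid body, with the reference point at the centre of mass, the 6x6 mass matrix has exactly this block structure:

```latex
M =
\begin{bmatrix}
  m\,I_{3\times 3} & 0 \\
  0 & J
\end{bmatrix},
\qquad
J =
\begin{bmatrix}
  J_{xx} & -J_{xy} & -J_{xz} \\
 -J_{xy} &  J_{yy} & -J_{yz} \\
 -J_{xz} & -J_{yz} &  J_{zz}
\end{bmatrix}
```

where m is the (translational) mass and J is the inertia tensor containing the mass moments of inertia. A finite-element global mass matrix interleaves such translational and rotational degrees of freedom for every node.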

Related

Forward and backward movement detection with IMU

We have an embedded device mounted in a vehicle. It has accelerometer, gyroscope and GPS sensors on board. The goal is to distinguish when the vehicle is moving forward and when it is moving backward (in reverse gear). The sensor axes are aligned with the vehicle's axes.
Here are our observations:
It's not enough to check the direction of acceleration, because going backwards and braking while moving forward produce acceleration in the same direction.
We could say that if the GPS speed decreased from 70 to 60 km/h it was a braking event. But it becomes tricky when the speed is below 20 km/h: a decrease from 20 to 10 km/h is possible in both directions.
We can't rely on GPS angle at low speeds.
How could we approach this problem? Any ideas, articles or research would be helpful.
You are looking for an Attitude and Heading Reference System (AHRS) implementation. Here's an open source code library. It works by fusing the two data sources (IMU and GPS) to determine the location and the heading.
AHRS provides you with roll, pitch and yaw, which are the angles around the X, Y and Z axes of the IMU device.
There are different algorithms to do that. Examples of AHRS algorithms are the Madgwick and Mahony filters. They will provide you with a quaternion and Euler angles, which can easily help you identify the orientation of the vehicle at any time.
This is a cool video of an AHRS algorithm running in real time.
Similar question is here.
EDIT
Without magnetometer data, you won't get high accuracy and your drift will increase over time.
Still, you can perform AHRS on 6DoF data (Acc XYZ and Gyr XYZ) using the Madgwick algorithm. You can find an implementation here. If you want to dive into the theory of things, have a look at Madgwick's internal report.
A Kalman filter could be an option for merging your 6DoF IMU with the GPS data, which could dramatically reduce the drift over time. But that requires a good understanding of Kalman filters and probably a custom implementation.
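As a starting point, here is a minimal sketch of accelerometer/gyroscope fusion using a complementary filter. It is deliberately simpler than Madgwick's filter, and the axis conventions, units and array shapes are assumptions for the example:

```python
# Minimal sketch (simpler than Madgwick): a complementary filter fusing
# gyroscope and accelerometer data to estimate pitch from 6DoF IMU samples.
# Assumptions: X points forward, Z points up, acc and gyr are (N, 3) arrays
# in m/s^2 and rad/s respectively, sampled at a fixed rate fs in Hz.
import numpy as np

def estimate_pitch(acc, gyr, fs, alpha=0.98):
    """Return one pitch estimate (radians) per sample."""
    dt = 1.0 / fs
    pitch = np.zeros(len(acc))
    # Initial pitch from the gravity direction alone.
    pitch[0] = np.arctan2(-acc[0, 0], np.hypot(acc[0, 1], acc[0, 2]))
    for k in range(1, len(acc)):
        # Short-term estimate: integrate the angular rate about the Y axis.
        gyro_pitch = pitch[k - 1] + gyr[k, 1] * dt
        # Long-term estimate: pitch implied by the measured gravity vector.
        acc_pitch = np.arctan2(-acc[k, 0], np.hypot(acc[k, 1], acc[k, 2]))
        # Blend: trust the gyro on short time scales, the accelerometer on long ones.
        pitch[k] = alpha * gyro_pitch + (1.0 - alpha) * acc_pitch
    return pitch
```

Once the orientation is known, the gravity component can be subtracted from the accelerometer readings, leaving the actual longitudinal acceleration to compare against the GPS speed trend.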

how to reconstruct scene from different views' point clouds

I am facing a problem on 3D reconstruction since I am new to this field. I have depth maps (point clouds) from several different views, and I want to use them to reconstruct the scene, getting an effect like KinectFusion. Is there any paper or source code to solve this problem, or any ideas on it?
PS: the point cloud is stored as a file with (x, y, z); you can check here to get the data.
Thank you very much.
As you have stated that you are new to this field, I shall attempt to keep this high level. Please do comment if there is something that is not clear.
The pipeline you refer to has three key stages:
Integration
Rendering
Pose Estimation
The Integration stage takes the unprojected points from a Depth Map (Kinect image) under the current pose and "integrates" them into a spatial data structure (a Voxel Volume such as a Signed Distance Function or a hierarchical structure like an Octree), often by maintaining per Voxel running averages.
The Rendering stage takes the inverse pose for the current frame and produces an image of the visible parts of the model currently in view. For the common volumetric representations this is achieved by Raycasting. The output of this stage provides the points of the model to which the next live frame is registered (the next stage).
The Pose Estimation stage registers the previously extracted model points to those of the live frame. This is commonly achieved by the Iterative Closest Point algorithm.
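For experimentation, a minimal sketch of the registration-and-merge idea using the Open3D library (my choice of library, not one used in the papers below; file names, the voxel size and the correspondence threshold are placeholders) could look like this:

```python
# Minimal sketch: pairwise ICP registration of .xyz point clouds with Open3D,
# followed by a naive merge of the registered frames.
import numpy as np
import open3d as o3d

files = ["view_000.xyz", "view_001.xyz", "view_002.xyz"]  # hypothetical paths
voxel_size = 0.01        # downsampling resolution, in scene units
max_corr_dist = 0.05     # ICP correspondence distance threshold

model = o3d.io.read_point_cloud(files[0])

for path in files[1:]:
    frame = o3d.io.read_point_cloud(path)
    # Pose estimation: register the downsampled live frame to the current model.
    result = o3d.pipelines.registration.registration_icp(
        frame.voxel_down_sample(voxel_size),
        model.voxel_down_sample(voxel_size),
        max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    # "Integration", simplified: move the frame into the model coordinate system
    # and append its points. KinectFusion instead fuses into a TSDF voxel volume.
    frame.transform(result.transformation)
    model += frame

o3d.io.write_point_cloud("merged.ply", model)
```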
With regards to pertinent literature, I would advise the following papers as a starting point.
KinectFusion: Real-Time Dense Surface Mapping and Tracking
Real-time 3D Reconstruction at Scale using Voxel Hashing
Very High Frame Rate Volumetric Integration of Depth Images on Mobile Devices

Distance estimation based on signal strength

I have a set of data which includes the position of a car and the signal level of an unknown emitter. I have to estimate the distance based on this. Basically the signal level varies inversely with the square of the distance, but once we include effects like multipath and reflections we need a different equation. This is where the Hata-Okumura model comes in: it can give us the path loss based on distance. However, the distance is unknown, as I don't know where the emitter is. I only have access to different lat/long sets and the received signal level.
What I am asking is: could you please guide me towards techniques which would help me estimate the distance based on the current position and signal strength? All I am asking for is guidance towards a technique which might be useful.
I have looked into How to calculate distance from Wifi router using Signal Strength?, but there the asker has 3 fixed wifi signals and can use FSPL. In an urban environment that does not work.
Since the car is moving, using any diffraction model would be very difficult. The multipath environment is constantly changing due to the moving car, and any reflection/diffraction model requires well-known object geometry around the car.
In your problem you have a known, moving car position time series [x(t), y(t)]. You also have a time series of rough measurements of the distance between the car and the emitter, r(t), where the emitter position is unknown. You need to solve for the stationary unknown emitter position (X, Y). So you have many noisy measurements and only two unknown parameters to estimate.
This is a classic least-squares estimation problem. You can formulate r(ti) = sqrt((x(ti) - X)^2 + (y(ti) - Y)^2), feed your data into this equation and do a least-squares estimation. The data is obviously noisy due to multipath, but the emitter is stationary, so over time and during the estimation process the noise can be more or less smoothed out.
Least Square Estimation
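A minimal sketch of that least-squares estimation with SciPy, using synthetic data in place of your real track and distance measurements:

```python
# Minimal sketch: estimate a stationary emitter position (X, Y) from a car
# track and rough distance measurements via nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

# Synthetic example: a car driving along a line past an emitter at (40, 25).
# In practice x, y come from the GPS track and r from the path-loss model.
t = np.linspace(0, 60, 200)
x, y = 5.0 * t, np.full_like(t, 10.0)            # known car track (metres)
true_emitter = np.array([40.0, 25.0])
r = np.hypot(x - true_emitter[0], y - true_emitter[1])
r += np.random.normal(0.0, 3.0, r.shape)         # multipath-like measurement noise

def residuals(p):
    """Difference between predicted and measured emitter distance."""
    X, Y = p
    return np.hypot(x - X, y - Y) - r

# Initial guess: centroid of the car track.
result = least_squares(residuals, x0=[x.mean(), y.mean()])
print("Estimated emitter position:", result.x)
```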

Paraview. Volume fraction and/or mass flow rate

My goal is to achieve something that was previously asked on another site (outside SO). On that site the question is unanswered, and in order to give it more visibility and to try to get an answer I am translating it here:
The issue is:
I have a small simulation of particles flowing through a wire mesh structure, and I'm interested in calculating the mass flow rate and volume fraction of particles at certain cross sections. I think I understand how to calculate mass flow rate by setting up small regions and dumping particle count and velocity from that region. I assume that volume fraction works in a similar fashion, except I only need to know the size of my particles and my dump region.
What I'm wondering is this - is it possible to do these things in Paraview? I can set up planes and slices and such, but I can't seem to extract much useful information out of them.
Further on down the road, what I would like to do would be to plot contours of volume fraction at certain planes, and plot the volume fraction along the vertical axis so I can see how high the particles are piling up on top of the screen, based on particle size, wire size, etc. Can Paraview do any of this?
This is a visualization issue. I don't know how to do it with ParaView. The idea is to count how many particles cross the slice.
My first approach was the pipeline DataReader | Spherical Glyph | Slice, with the normal fixed manually along the z axis, but with no result. I also tried adding the Surface Flow filter, and nothing either. Probably I am piping the data in a bad way.
To show the pipelining process I attached an image (focus on PlotOverLine1 and the filters above it).
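For what it's worth, the counting approach described above can also be done outside ParaView once per-particle data is exported; a minimal sketch, where the array names, units and slab-binning choice are assumptions:

```python
# Minimal sketch: mass flow rate and volume fraction of particles in a thin
# slab around the plane z = z0, from exported per-particle data (one time step).
import numpy as np

def slab_statistics(pos, vel, radius, mass, z0, thickness, area):
    """pos/vel are (N, 3), radius/mass are (N,). Returns (mass_flow_rate, volume_fraction)."""
    # Particles are counted as wholly in or out of the slab based on their centres.
    in_slab = np.abs(pos[:, 2] - z0) < 0.5 * thickness
    # Binned estimate of the mass flow rate through the plane:
    # sum of m_i * v_z,i over the slab, divided by the slab thickness.
    mass_flow = np.sum(mass[in_slab] * vel[in_slab, 2]) / thickness
    # Volume fraction: total particle volume in the slab over the slab volume.
    particle_volume = np.sum(4.0 / 3.0 * np.pi * radius[in_slab] ** 3)
    volume_fraction = particle_volume / (area * thickness)
    return mass_flow, volume_fraction
```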

Fundamental matrix to be computed or known apriori, for real world applications

If you are to design a real-world application of a stereo vision algorithm, let's say for a UAV or a spacecraft which is computing elevation maps from the two images, is the fundamental matrix known a priori, or will I have to calculate it alongside the disparity map?
If the fundamental matrix can be obtained a priori, is it correct that knowledge of the calibration matrix and the projection matrices is sufficient to compute the matrix?
Regarding your first question:
In my experience, this depends on the mechanical design of your camera system, and on whether you use a fixed focal length. If you are able to mount your cameras rigidly, and if your focal length does not change, then you can pre-calibrate the whole thing.
If the relative position of your cameras is likely to change (as they are mounted, for example, on a not perfectly rigid structure), or if you are zooming or using autofocus (!), then you must think about dynamic calibration (or about better fixing your cameras). The depth error induced by calibration error depends on the baseline of your stereo setup and the distance to your scene, so you can compute your tolerances.
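For reference, the standard stereo relation behind that last statement, with focal length f (in pixels), baseline B and disparity d, is:

```latex
Z = \frac{fB}{d},
\qquad
\Delta Z \approx \frac{Z^{2}}{fB}\,\Delta d
```

so a disparity (or calibration) error of a fixed number of pixels produces a depth error that grows quadratically with distance and shrinks with a larger baseline.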
Regarding your second question:
Yes, it is sufficient.
You should be aware that there are many ways of computing an F-matrix. I highly recommend looking into Hartley & Zisserman, which is the de facto reference for these topics.
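As a concrete illustration of the second point, here is a minimal NumPy sketch (the camera parameters are placeholders) of computing F from the calibration matrices and a known relative pose:

```python
# Minimal sketch: fundamental matrix from known calibration and relative pose.
# Camera 1 is the reference: P1 = K1 [I | 0], P2 = K2 [R | t].
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def fundamental_from_calibration(K1, K2, R, t):
    """F = K2^{-T} [t]_x R K1^{-1} (Hartley & Zisserman), up to scale."""
    E = skew(t) @ R                                    # essential matrix
    F = np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)
    return F / np.linalg.norm(F)                       # normalise the arbitrary scale

# Placeholder stereo rig: identical intrinsics, pure horizontal baseline of 0.1 m.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
F = fundamental_from_calibration(K, K, np.eye(3), np.array([0.1, 0.0, 0.0]))
print(F)
```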