How can I output the depth information of each frame in blender to a .txt file?
I can generate a depth map as a gray-scale image with values in [0, 1], but I want the values in the units I use in Blender (meters in my case).
The solution turned out to be very easy: just link the Z buffer in the Node Editor to the output, and also make sure to turn the Z buffer on.
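If you want to script that setup and dump the values per frame, here is a rough sketch in Python (bpy). Treat it as an outline under assumptions rather than a drop-in script: property and socket names such as use_pass_z and 'Depth'/'Z' vary between Blender versions, and the output folder is just an example.

import bpy

scene = bpy.context.scene

# Turn the Z pass on (older versions expose this on the render layer instead).
bpy.context.view_layer.use_pass_z = True

# Minimal compositor graph: Render Layers -> File Output (OpenEXR), so the
# depth is written in scene units (metres here) instead of being clipped to [0, 1].
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new("CompositorNodeRLayers")
out = tree.nodes.new("CompositorNodeOutputFile")
out.format.file_format = 'OPEN_EXR'
out.base_path = "//depth/"   # example output folder, one EXR per frame

# The depth socket is called 'Depth' in recent versions and 'Z' in older ones.
z_socket = rl.outputs.get("Depth") or rl.outputs.get("Z")
tree.links.new(z_socket, out.inputs[0])

# After rendering, each EXR holds metric depth per pixel; it can then be loaded
# with any EXR reader and written out to a .txt file (e.g. via numpy.savetxt).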
I would like to calculate the horizontal and vertical field of view from the camera intrinsic matrix for the cameras used in the KITTI dataset. The reason I need the field of view is to convert a depth map into 3D point clouds.
Though this question was asked quite a long time ago, I felt it needed an answer, as I ran into the same issue and was unable to find any info on it.
I have, however, solved it using the information available in this document and some more general camera calibration documents.
Firstly, we need to convert the supplied disparity into distance. This can be done by first converting the disparity map into floats using the method in the dev_kit, where they state:
disp(u,v) = ((float)I(u,v))/256.0;
This disparity can then be converted into a distance using the standard stereo vision equation:
depth = baseline * focal length / disparity
Now come some tricky parts. I searched high and low for the focal length and was unable to find it in the documentation.
I realised just now, while writing this, that the baseline is documented in the aforementioned source; moreover, from section IV.B we can see that it can also be found indirectly in P(i)rect.
The P_rect matrices can be found in the calibration files and will be used both for calculating the baseline and for the translation from uv in the image to xyz in the real world.
The steps are as follows:
For each pixel in the depth map:
xyz_normalised = P_rect \ [u, v, 1]
where u and v are the x and y coordinates of the pixel respectively,
which will give you an xyz_normalised of the form [x, y, z, 0] with z = 1.
You can then multiply it by the depth given at that pixel to obtain an xyz coordinate.
For completeness: since the depth map here comes from the colour cameras, you need to use P_3 from the cam_cam calibration txt files to get the baseline (as it contains the baseline between the colour cameras), while P_2 belongs to the left camera, which is used as the reference for the occ_0 files.
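Putting the above together, here is a minimal sketch in Python/NumPy. The file path, image size, and the numeric P_rect values are placeholders for illustration (parse the real P_rect_02 / P_rect_03 entries from calib_cam_to_cam.txt in practice), and the back-projection simply uses the left 3x3 block of P_rect as the rectified intrinsics:

import numpy as np
from PIL import Image

# Placeholder P_rect matrices (3x4); parse P_rect_02 / P_rect_03 from
# calib_cam_to_cam.txt in practice. The numbers below are only illustrative.
P2 = np.array([[721.5377, 0.0, 609.5593,   44.8573],
               [0.0, 721.5377, 172.8540,    0.2164],
               [0.0,      0.0,   1.0,       0.0027]])
P3 = np.array([[721.5377, 0.0, 609.5593, -339.5242],
               [0.0, 721.5377, 172.8540,    2.1994],
               [0.0,      0.0,   1.0,       0.0027]])

fx, fy = P2[0, 0], P2[1, 1]
cx, cy = P2[0, 2], P2[1, 2]

# Horizontal/vertical field of view from the rectified intrinsics
# (image size is an assumption; KITTI colour images are roughly 1242 x 375).
width, height = 1242, 375
fov_x = 2.0 * np.degrees(np.arctan(width  / (2.0 * fx)))
fov_y = 2.0 * np.degrees(np.arctan(height / (2.0 * fy)))

# Baseline between the two colour cameras, derived from the fourth column
# of the P_rect matrices (P[0, 3] = -fx * t_x relative to the reference cam).
baseline = (P2[0, 3] - P3[0, 3]) / fx   # metres

# disp(u, v) = I(u, v) / 256.0, per the dev_kit; the path is just an example.
disp_png = np.asarray(Image.open("disp_occ_0/000000_10.png"), dtype=np.float32)
disp = disp_png / 256.0
valid = disp > 0.0

# depth = baseline * focal length / disparity
depth = np.zeros_like(disp)
depth[valid] = baseline * fx / disp[valid]

# Back-project every valid pixel: multiply the normalised ray by its depth.
v_idx, u_idx = np.nonzero(valid)
z = depth[valid]
x = (u_idx - cx) * z / fx
y = (v_idx - cy) * z / fy
points = np.stack([x, y, z], axis=1)   # N x 3 point cloud in metres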
I imported my point cloud with normals into MeshLab and I would like to run a Screened Poisson Surface Reconstruction. When I try to do this I get a message like: "Filter requires correct per vertex normals. E.g. it is necessary that ALL your input vertices have a proper, not-null normal. If you encounter this error on a triangulated mesh, try to use the Remove Unreferenced Vertices filter..."
When I tried to use this option, all my vertices disappeared. I also checked my normals and they all have non-null values.
I don't understand where the problem is. Please help me.
Your input is not a triangulated mesh, so you should not call the "Remove Unreferenced Vertices" filter. That filter removes those vertices that are not used by any triangle, which means "every vertex" if you have no triangles.
Assuming your file is in .xyz format, you should have 6 numbers per vertex:
x coord, y coord, z coord, x normal, y normal, z normal
Most likely, your file only contains the coordinate data.
If you cannot add the normal information to the file, you can estimate it in MeshLab with:
Filters > Normals, Curvatures and Orientation > Compute normals for point sets
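If you want to double-check what the file actually contains, here is a small sketch in Python (assuming a plain-text .xyz with one vertex per line; the file name is just an example):

import numpy as np

# Load a plain-text point cloud; each row should have 6 values:
# x, y, z, nx, ny, nz. The file name is just an example.
data = np.loadtxt("scan.xyz")

if data.shape[1] < 6:
    print("No normals stored: only %d values per vertex" % data.shape[1])
else:
    normals = data[:, 3:6]
    lengths = np.linalg.norm(normals, axis=1)
    # Poisson reconstruction needs a proper (non-null) normal on every vertex.
    print("Vertices with zero-length normals:", int(np.sum(lengths == 0.0)))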
I am trying to figure out what the raw data in Kinect V2 is... I know we can convert this raw data to meters, and to a gray color to represent the depth, but what is the unit of this raw data?
And why are all the images captured by Kinect mirrored?
The raw values stored in the depth image are in millimeters. You can get the X and Y values using the pixel position along with the depth camera intrinsic parameters. If you want I could share a Matlab code that converts the depth image into X,Y,Z values.
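For reference, here is a minimal sketch of that conversion in Python rather than Matlab; the intrinsic values below are placeholders (use the ones reported by your own sensor), and depth_mm stands in for a captured 512 x 424 depth frame:

import numpy as np

# Placeholder depth-camera intrinsics (pixels); read the real ones from your sensor.
fx, fy = 365.0, 365.0
cx, cy = 256.0, 212.0

# depth_mm: Kinect V2 depth frame, 512 x 424, values in millimetres.
depth_mm = np.zeros((424, 512), dtype=np.uint16)  # stand-in for a captured frame

z = depth_mm.astype(np.float32) / 1000.0          # metres
v, u = np.mgrid[0:depth_mm.shape[0], 0:depth_mm.shape[1]]
x = (u - cx) * z / fx
y = (v - cy) * z / fy
xyz = np.dstack([x, y, z])                        # per-pixel X, Y, Z in metres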
Yes, the images are mirrored in the Windows SDK and in "libfreenect2", which is an open-source version of the SDK. I couldn't get a solid answer as to why this is, but you could look at the discussion available in the link.
There are different kinds of frames that can be captured with Kinect V2, and each kind of raw data has a different unit. For example, for depth frames it is millimeters; for color it is RGB (0-255, 0-255, 0-255); for body frames it is 0 or 1 (same resolution as the depth frame, but it can identify up to a maximum number of human bodies at a time); and so on.
Ref: https://developer.microsoft.com/en-us/windows/kinect
I've been struggling for some time to find a way in MeshLab to include or transfer UVs onto a Poisson model from source meshes. I will try to explain more of what I'm trying to accomplish below.
My source meshes have UVs along with texture data. I need to build a fused model and include the texture data. It is for facial expression scan data reconstruction for a production pipeline which ultimately builds a facial rig for animation. Our source scan data includes marker information which we use to register and build a fused scan model, which is then used to generate a retopologized mesh for blendshapes.
Previously, we were using David3D. http://www.david-3d.com/en/support/downloads
David3D used Poisson surface reconstruction to create a fused model. The fused model it created brought along the UVs and optimized the source textures into one UV tile. I'll post a picture of the result below that I'm looking to recreate in MeshLab.
I need to find this solution in MeshLab so I can build tools to help automate this process. David3D version 5 does not have a development kit to program against.
Is it possible in MeshLab to apply the UVs from the regions of the source mesh onto the Poisson model? Could I use a filter to transfer them? Reproject them?
Or is there another reconstruction method/process within MeshLab that will keep the UVs?
Here is an image of what the resulting UV parameterization from David looks like; the UVs are white on the left half of the image. [Image: David3D UV Layout Result]
Thank you,
Dan
No, in MeshLab there is no direct way to transfer UV mapping between two layers.
This is because UV transfer is not, in the general case, a trivial task. It is not simply a matter of assigning to the new surface the "closest" UV of the original mesh: this would not work across UV discontinuities, which are present in the example you linked. Additionally, the two meshes should be almost coincident, otherwise you would also have problems defining the "closest" UV.
There are a couple of ways to do it, but they require manual work and a re-sampling of the texture:
create a UV mapping of the re-meshed model using whatever tool you may have, then resample the existing texture on the new parametrization using "transfer: vertex attributes to Texture (1 or 2 meshes)", using texture color as source
load the original mesh and, using the screenshot function, create "virtual" photos of the model (turn off illumination and do NOT use ortho views), adding them as raster layers until the model surface has been fully covered. Then load the new model, which should be in the same space, and texture-map it with the "parametrization + texturing" filter, using those registered images
In MeshLab it is also possible to create a new texture from the original images, if you have a way to import the registered cameras...
TL;DR: UV coords to color channels → Vertex Attribute Transfer → Color channels back to UV coords
I have had very good results kludging it through the color channels, like this (say you are transferring from layer A to layer B):
Make sure A and B are roughly aligned with each other (you can use the ICP filter if needed).
Select layer A, then:
Texture → Convert Per Wedge UV to Per Vertex UV (if you've got wedge coords)
Color Creation → Per Vertex Color Function, and transfer the tex coords to the color channels (assuming UV range 0-1, you'll want to tweak these if your range is larger):
func r = 255.0 * vtu
func g = 255.0 * vtv
func b = 0
Sampling → Vertex Attribute Transfer, and use this to transfer the vertex colors (which now hold texture coordinates) from layer A to layer B.
source mesh = layer A
target mesh = layer B
check Transfer Color
set distance large enough to not miss any spots
Now select layer B, which contains the mapped vertex colors, and do the opposite of what you did for A:
Texture → Per Vertex Texture Function
func u = r / 255.0
func v = g / 255.0
Texture → Convert Per Vertex UV to Per Wedge UV
And that's it.
The results aren't going to be perfect, but in practice I often find them sufficient. In particular:
If the texture is not continuously mapped to layer A (e.g. maybe you've got patches of image mapped to certain areas, etc.), it's very possible for the attribute transfer to B (especially when upsampling) to have some vertices be interpolated across patch boundaries, which will probably lead to visual artifacts along patch boundaries.
UV coords may be quantized by conversion to a color channel and back. (You could maybe eliminate this by stretching U out over all three color channels, then transferring U, then repeating for V -- never tried it though.)
That said, there are a lot of cases it works in.
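To get a feel for the quantization mentioned above, here is a standalone sketch (not a MeshLab script) that round-trips UV coordinates through an 8-bit channel; the texture size is just an example:

import numpy as np

# Round-trip a UV coordinate through an 8-bit color channel,
# mirroring the Per Vertex Color Function / Texture Function pair above.
uv = np.random.rand(10000)                    # original coords in [0, 1]
encoded = np.round(255.0 * uv)                # what ends up in the color channel
decoded = encoded / 255.0                     # recovered by "func u = r / 255.0"

max_err_uv = np.abs(decoded - uv).max()       # at most ~1/510 in UV space
texture_size = 4096                           # example texture resolution
print("max error: %.5f UV units, ~%.1f pixels on a %dpx texture"
      % (max_err_uv, max_err_uv * texture_size, texture_size))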
I may or may not add images / video to this post another day.
PS Meshlab is pretty straightforward to build from source; it might be possible to add a UV coordinate option to the Vertex Attribute Transfer filter. But, to make it more useful, you'd want to make sure that you didn't interpolate across boundary edges in the mapped UV projection. Definitely a project I'd like to work on some day... in theory. If that ever happens I'll post a link here.
I'm trying to add monochromatic noise to an image, similar to the Photoshop version, using the command line; however, I can't see any option to achieve it.
I've created code in JS that does it very well, and the logic is very simple:
Foreach pixel:
Generate random noise pixel
Add or subtract (random) noise pixel to/from original pixel
To create monochromatic noise, the add/subtract is applied on a per-pixel, not per-channel, basis, e.g.
Pi - original pixel
Pr - noise pixel
MonoPixel = Pi+Pr or Pi-Pr
Is there any way I can randomly add or subtract pixels via command line ?
Thanks
You can use the ImageMagick +noise option to add noise. To get monochromatic noise, you'll have to do something more complex: create a separate noise image combined with a base color and composite that with your source image.
This link may be helpful: http://brunogirin.blogspot.com/2009/09/making-noise-with-imagemagick.html
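For reference, here is a minimal sketch of the per-pixel logic described in the question, in Python with NumPy and Pillow rather than on the command line; the file names and noise strength are just example values:

import numpy as np
from PIL import Image

img = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.int16)

h, w, _ = img.shape
amount = 30                                       # noise strength, example value
noise = np.random.randint(0, amount + 1, (h, w))  # one noise value per pixel
sign = np.random.choice([-1, 1], (h, w))          # randomly add or subtract

# Apply the same signed noise to all three channels of a pixel,
# which is what makes the noise monochromatic.
out = img + (sign * noise)[:, :, None]
out = np.clip(out, 0, 255).astype(np.uint8)

Image.fromarray(out).save("output.png")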
You could try to build your own little shell function. Use $RANDOM (a Bash environment variable that returns a random integer in the range 0..32767) and check whether it is odd or even. Let odd mean + and even mean -.
echo $(($RANDOM % 2))
should return 1 ($RANDOM was odd) or 0 ($RANDOM was even) in random order...