I have two raster layers with different resolutions that I want to join (see this image). One has a higher resolution (transparent yellow); the other has a lower resolution but a larger extent (the whole earth) and carries information about different classes (drawn in different colors here). The resulting raster should have the higher resolution and the extent of the yellow raster, but should be joined with the other raster, e.g. containing the information about which class each cell was lying within.
Really appreciate any help!
Cheers
You should mosaic the rasters. Use the Mosaic To New Raster tool.
But first the rasters should be brought to a common standard.
Use the Resample tool to change the resolution.
Your raster values should also be on the same scale. ArcMap will mosaic them anyway, but the result can be wrong.
Take LST rasters, for example: suppose one raster is in degrees Fahrenheit and the other is in degrees Celsius.
In this case, even if the tools run and generate a new raster layer, the values will be incorrect.
I hope this answer was helpful.
Good luck.
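If it helps, here is a rough arcpy sketch of those two steps (the tools are the standard Resample and Mosaic To New Raster geoprocessing tools; the file names, cell size and pixel type are placeholders you would replace with your own):

```python
# Sketch only: assumes ArcGIS with arcpy; paths and parameters are placeholders.
import arcpy

arcpy.env.workspace = r"C:\data"  # hypothetical workspace

# 1. Resample the coarse classification raster to the cell size of the
#    high-resolution raster (NEAREST preserves the class values).
arcpy.Resample_management("classes_lowres.tif", "classes_resampled.tif",
                          cell_size="10 10", resampling_type="NEAREST")

# 2. Mosaic the two rasters into a new raster dataset.
arcpy.MosaicToNewRaster_management(
    input_rasters=["highres.tif", "classes_resampled.tif"],
    output_location=r"C:\data",
    raster_dataset_name_with_extension="joined.tif",
    pixel_type="16_BIT_UNSIGNED",
    cellsize=10,
    number_of_bands=1,
    mosaic_method="FIRST")
```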
When you mask a bitmap by reducing the polygons it is displayed on, what is that technique called?
I often see something similar done for physics, where an object gets a simplified edge for collision detection and the like, but a long while ago I saw a comparable approach used with polygons for visual masking, and I have completely forgotten what it was called or how to find it.
ADDITIONAL INFO:
In this example, the polygons are used to "mask" some of the image:
http://fancyratstudios.com/2010/02/programming/progresstimer-for-cocos2d/
What's the name of using polygons to do masking in this manner?
According to the definition you gave, the technique you're referring to could be Gouraud shading, whose definition is also reported in the Video-Based Rendering book on page 50:
Hole-filling: for polygons that are not depicted in any image,
determine appropriate vertex colors; during rendering, Gouraud shading
is used to mask the missing texture information.
I'm not sure what you're talking about, but it's either rasterization (projecting polygons onto a plane), applying a texture (projecting an image onto a set of polygons), or decimation (reducing the number of polygons).
I would like to plot a 2D vector field in a single picture using the hue & brightness method, i.e. hue encodes direction (or phase) and brightness encodes magnitude.
This method is often used to visualize e.g. magnetic domains, vortices, etc. reconstructed from Lorentz microscopy.
As input, I have two images of size 1024x1024 whose pixels contain the magnitudes of the X and Y components of the vector field.
Since DM does not natively support an HSL color scheme, I think one should first use a group of self-defined functions to convert HSL to RGB...
You can only use RGB images in DigitalMicrograph, so you will have to do the conversion from HSB to RGB in your script code, and then create the according RGB image.
Luckily, there is a demonstration script on the Gatan script resources webpage which does exactly that! You can basically use the script as it is shown there.
Gatan Script Resources
Link to script-file:
Display as HSB
Note that the script uses complex images as input, simply as a convenient container to combine two images into a single one. The test function in the script demonstrates this.
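For reference, here is a minimal sketch of the same hue/brightness mapping in plain NumPy/Matplotlib rather than DM script (the vx/vy arrays stand in for your two input images):

```python
# Illustration of the hue = direction, brightness = magnitude idea.
import numpy as np
from matplotlib.colors import hsv_to_rgb

def vector_field_to_rgb(vx, vy):
    angle = np.arctan2(vy, vx)                 # direction in -pi..pi
    magnitude = np.hypot(vx, vy)

    hue = (angle + np.pi) / (2.0 * np.pi)      # map direction to 0..1
    sat = np.ones_like(hue)                    # full saturation
    val = magnitude / max(magnitude.max(), 1e-12)  # brightness = normalized magnitude

    hsv = np.stack([hue, sat, val], axis=-1)   # shape (H, W, 3)
    return hsv_to_rgb(hsv)                     # RGB floats in 0..1

# Example with random data standing in for the real 1024x1024 images:
vx = np.random.randn(1024, 1024)
vy = np.random.randn(1024, 1024)
rgb = vector_field_to_rgb(vx, vy)              # pass to imshow or save as an image
```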
I'm attempting to calculate vertex normals for various game assets. The normals I calculate are used for "inflating" the model (to draw behind the real model producing a thick outline).
I currently compute the normal for each face and average all of them (several other questions on Stack Overflow suggest this approach). However, this doesn't work for sharp corners like this one (adjacent faces' normals marked in orange, the normal I'm trying to calculate is outlined in green).
The object looks like a small pedestal and we're looking at the front-left corner. There are three adjoining faces (the bottom face isn't visible; its normal points straight down).
Blender computes an excellent normal that lies squarely in the middle of the three faces' normals; it seems like it somehow calculates a normal that has minimum rotation to each of the three face normals. Blender's normal also doesn't change when the quads are triangulated differently.
Averaging the faces' normals gives me a different normal that points slightly upward in the Z-axis (-0.45, -0.89, +0.08). Inflating my model this way doesn't produce a good outline because the bottom face of the outline is shifted up and doesn't enclose the original model.
I attempted to look at the Blender source code but couldn't find what I was looking for. If anyone can point me to the algorithm in the Blender source, I'd accept that also.
Weight each face's normal by the angle that face subtends at the shared vertex. It is common practice in surface rendering (see the discussion here: http://www.bytehazard.com/code/vertnorm.html), and it will ensure that your bottom face is weighted more strongly than the two slanted side faces. I don't know whether Blender does it differently, but you should give it a try.
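For reference, here is a minimal NumPy sketch of angle-weighted vertex normals, assuming a triangle mesh given as a vertex array and a face-index array (names and layout are illustrative, not Blender's actual code):

```python
# Minimal sketch: angle-weighted vertex normals for a triangle mesh.
# vertices: (V, 3) float array, faces: (F, 3) int array of vertex indices.
import numpy as np

def angle_weighted_normals(vertices, faces):
    normals = np.zeros_like(vertices)
    for i0, i1, i2 in faces:
        p0, p1, p2 = vertices[i0], vertices[i1], vertices[i2]
        face_n = np.cross(p1 - p0, p2 - p0)
        face_n /= np.linalg.norm(face_n)

        # Weight the face normal by the interior angle at each corner.
        for vi, a, b in ((i0, p1 - p0, p2 - p0),
                         (i1, p2 - p1, p0 - p1),
                         (i2, p0 - p2, p1 - p2)):
            cos_a = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
            normals[vi] += angle * face_n

    # Normalize the accumulated per-vertex normals.
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.maximum(lengths, 1e-12)
```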
I'm searching for a method of text recognition based on document borders.
Alternatively, methods that can solve the problem of finding a new viewpoint.
For example: the camera is at point (x1,y1,z1) and the resulting picture has perspective distortions, but we could find a point (x2,y2,z2) for the camera that corrects the picture.
Thanks.
The usual approach, which assumes that the document's page is approximately flat in 3D space, is to warp the quadrangle encompassing the page into a rectangle. To do so you must estimate a homography, i.e. a (linear) projective transformation between the original image and its warped counterpart.
The estimation requires matching points (or lines) between the two images, and a common choice for documents is to map the page corners in the original images to the image corners of the warped image. This will in general produce a rectangle with an incorrect aspect ratio (i.e. the warped page will look "wider" or "taller" than the real one), but this can be easily corrected if you happen to know in advance what the real aspect ratio is (for example, because you know the type of paper used, whether letter, A4, etc.).
A simple algorithm to perform the estimation is the so-called Direct Linear Transformation.
The OpenCV library contains routines that help accomplish all of these tasks; look into it.
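As a rough illustration (not a complete document scanner), here is what the warp step might look like in Python with OpenCV; the corner coordinates and output size are placeholders, with the height chosen to match an assumed A4 aspect ratio:

```python
# Sketch: warp the page quadrangle to an upright rectangle with OpenCV.
import cv2
import numpy as np

img = cv2.imread("document.jpg")  # placeholder file name

# Page corners in the photo, ordered top-left, top-right, bottom-right, bottom-left.
# In practice you would detect these; here they are made-up values.
src = np.float32([[120, 80], [880, 130], [940, 1150], [90, 1100]])

# Target rectangle; height chosen for an A4 aspect ratio (1 : sqrt(2)).
w = 1000
h = int(w * 2 ** 0.5)
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

H = cv2.getPerspectiveTransform(src, dst)     # the 3x3 homography
warped = cv2.warpPerspective(img, H, (w, h))  # rectified page
cv2.imwrite("document_rectified.jpg", warped)
```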
I'm using a 3d engine and need to translate between 3d world space and 2d screen space using perspective projection, so I can place 2d text labels on items in 3d space.
I've seen a few posts of various answers to this problem but they seem to use components I don't have.
I have a Camera object, and can only set its current position and look-at position; it cannot roll. The camera is moving along a path, and a certain target object may appear in its view and then disappear.
I have only the following values
lookat position
position
vertical FOV
Z far
Z near
and obviously the position of the target object.
Can anyone please give me an algorithm that will do this using just these components?
Many thanks.
All graphics engines use matrices to transform between different coordinate systems; indeed OpenGL and DirectX use them, because they are the standard way to do it.
Cameras usually construct the matrices using the parameters you have:
view matrix (transforms the world so that you see it from the camera's position); it uses the look-at position and the camera position (plus the up vector, which is usually 0,1,0)
projection matrix (transforms from 3D coordinates to 2D coordinates); it uses the FOV, near, far and aspect ratio.
You can find information on how to construct these matrices on the internet by searching for the OpenGL functions that create them:
gluLookAt creates the view matrix
gluPerspective creates the projection matrix
But I can't imagine an engine that doesn't let you get these matrices, because I can assure you they are somewhere; the engine is using them.
Once you have those matrices, you multiply them to get the view-projection matrix. This matrix transforms from world coordinates to screen coordinates, so just multiply it with the position you want to project (as a 4-component vector, with the 4th component set to 1.0).
But wait: the result will be in homogeneous coordinates, so you need to divide the X, Y, Z of the resulting vector by W. Then you have the position in normalized screen coordinates (0 means the center, 1 means right, -1 means left, etc.).
From here it is easy to get pixel coordinates by multiplying by the width and height.
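As a rough illustration of the whole pipeline (not your engine's actual API), here is a NumPy sketch that builds the view and projection matrices from exactly the values you listed, following the gluLookAt/gluPerspective conventions, and projects a world point to pixel coordinates; the camera values at the bottom are made up:

```python
# Sketch (NumPy, OpenGL-style conventions): build view and projection matrices
# from the camera parameters and project a world point to screen pixels.
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    eye, target, up = map(np.asarray, (eye, target, up))
    f = target - eye; f /= np.linalg.norm(f)     # forward
    s = np.cross(f, up); s /= np.linalg.norm(s)  # right
    u = np.cross(s, f)                           # true up
    view = np.identity(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def perspective(fov_y_deg, aspect, z_near, z_far):
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    proj = np.zeros((4, 4))
    proj[0, 0] = f / aspect
    proj[1, 1] = f
    proj[2, 2] = (z_far + z_near) / (z_near - z_far)
    proj[2, 3] = (2.0 * z_far * z_near) / (z_near - z_far)
    proj[3, 2] = -1.0
    return proj

def world_to_screen(point, view, proj, width, height):
    clip = proj @ view @ np.append(point, 1.0)   # homogeneous clip coordinates
    if clip[3] <= 0.0:
        return None                              # point is behind the camera
    ndc = clip[:3] / clip[3]                     # perspective divide
    x = (ndc[0] * 0.5 + 0.5) * width
    y = (1.0 - (ndc[1] * 0.5 + 0.5)) * height    # flip y for screen space
    return x, y

# Example with made-up camera values:
view = look_at(eye=(0, 2, 5), target=(0, 0, 0))
proj = perspective(60.0, 800 / 600, 0.1, 100.0)
print(world_to_screen(np.array([0.0, 1.0, 0.0]), view, proj, 800, 600))
```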
I have some slides explaining all this here: https://docs.google.com/presentation/d/13crrSCPonJcxAjGaS5HJOat3MpE0lmEtqxeVr4tVLDs/present?slide=id.i0
Good luck :)
P.S: when you work with 3D it is really important to understand the three matrices (model, view and projection), otherwise you will stumble every time.
so I can place 2d text labels on items in 3d space
Have you looked up "billboard" techniques? Sometimes just knowing the right term to search under is all you need. This refers to polygons (typically rectangles) that always face the camera, regardless of camera position or orientation.