Hi everyone.
I'm an Android developer.
I want to scale my image around the center of the currently displayed part of the image, using a matrix.
So I scaled my image with a matrix and then translated it by a calculated offset. But the application doesn't work correctly: it doesn't find the correct center, and the image drifts to the right when zooming.
Why is this? I can't find the problem.
The code follows:
matrix.reset();
curScale += 0.02f;
h = orgImage.getHeight(); // the original line discarded this value instead of assigning it
w = orgImage.getWidth();
matrix.postScale(curScale, curScale);
rtnBitmap = Bitmap.createBitmap(orgImage, 0, 0, w, h, matrix, true);
curImageView.setImageBitmap(rtnBitmap);
Matrix curZoomOutMatrix = new Matrix();
pointerx = (int) ((mDisplayWidth / 2 - curPosX) * curScale);
curPosX = -pointerx;
pointery = (int) ((mDisplayHeight / 2 - curPosY) * curScale); // originally used mDisplayWidth, a copy-paste slip
curPosY = -pointery;
Log.i("ZoomOut-> posX = ", Integer.toString(curPosX));
Log.i("ZoomOut-> posY = ", Integer.toString(curPosY));
curZoomOutMatrix.postTranslate(curPosX, curPosY);
curImageView.setImageMatrix(curZoomOutMatrix);
curImageView.invalidate();
Do you have any sample code for centered zoom-in and zoom-out of an ImageView with a matrix? Can anyone explain what's wrong? Please help me.
Update: it was my fault. Here is what went wrong.
First, I scale the image from the original one, so the image size becomes (width, height) * scale.
Then I calculate the absolute position of the point that is displayed at the center, and move my ImageView from its current position to the calculated one. My mistake was here: when I calculated the view position, I computed it from the current, already-scaled position. So after scaling, the position was not <original position> * <new scale> but <original position> * <old scale> * <new scale>, which produced a strange offset.
So I rewrote the code to calculate the center position from the original, unscaled image. The method now looks like this:
public void calculate(float offset) {
    float tmpScale = curScale - offset;
    // Recover the original (unscaled) coordinates of the point shown at the screen center
    float orgWidth = (mDisplayWidth / 2 - curPosX) / tmpScale;
    float orgHeight = (mDisplayHeight / 2 - curPosY) / tmpScale;
    // Re-project that point at the new scale to get the new view position
    int tmpPosX = (int) (mDisplayWidth / 2 - orgWidth * curScale);
    int tmpPosY = (int) (mDisplayHeight / 2 - orgHeight * curScale);
    curPosX = tmpPosX;
    curPosY = tmpPosY;
    Matrix matrix = new Matrix();
    matrix.postTranslate(tmpPosX, tmpPosY);
    curImageView.setImageMatrix(matrix);
    curImageView.invalidate();
}
Thank you, everyone.
I have a problem with rotation within a JSVGCanvas.
The user can load SVG images into a document page (a JLayeredPane).
Every image is displayed in its own JSVGCanvas and has its own class.
The user can resize the image by dragging the endpoints and can also move the image around.
The rotation itself works perfectly
(screenshots: basic and rotated situations)
Only the subsequent change of the canvas's bounding box results in a scaling that no longer corresponds to the original size
(screenshot: after setting the bounding box)
Code fragment in UpdateManager():
....
// calculating new canvas bounding box
AffineTransform at = AffineTransform.getRotateInstance(rotation*Math.PI/180.0,
originalX + originalW/2.0,
originalY + originalH/2.0);
Rectangle2D.Double rect = new Rectangle2D.Double(originalX, originalY, originalW, originalH);
Shape s = at.createTransformedShape(rect);
xx = s.getBounds2D().getX();
yy = s.getBounds2D().getY();
ww = s.getBounds2D().getWidth();
hh = s.getBounds2D().getHeight();
canvas.setRenderingTransform(AffineTransform.getRotateInstance(rotation*Math.PI/180.0,
originalW/2.0, originalH/2.0));
// this.setBounds() will do the stuff and also change the Canvas Bounds
setBounds(xx, yy, ww, hh);
....
I am grateful for any help.
Solved...
I found a way to solve this problem:
....
// calculating new canvas bounding box
AffineTransform at = AffineTransform.getRotateInstance(rotation*Math.PI/180.0,
originalX + originalW/2.0,
originalY + originalH/2.0);
Rectangle2D.Double rect = new Rectangle2D.Double(originalX, originalY, originalW, originalH);
Shape s = at.createTransformedShape(rect);
xx = s.getBounds2D().getX();
yy = s.getBounds2D().getY();
ww = s.getBounds2D().getWidth();
hh = s.getBounds2D().getHeight();
double rotScale = originalW/ww; // compensate for the enlarged bounding box
double diffX = (originalW * rotScale - originalW) / 2.0 * -1.0; // re-center horizontally
double diffY = (originalH * rotScale - originalH) / 2.0 * -1.0; // re-center vertically
AffineTransform af = AffineTransform.getScaleInstance(rotScale, rotScale);
af.preConcatenate(AffineTransform.getTranslateInstance(diffX, diffY));
af.preConcatenate(AffineTransform.getRotateInstance(rotation*Math.PI/180.0, originalW/2.0, originalH/2.0));
canvas.setRenderingTransform(af);
// this.setBounds() will do the stuff and also change the Canvas Bounds
setBounds(xx, yy, ww, hh);
....
Maybe this will help others having the same problem.
(screenshot: solved result)
After searching around, I wasn't able to find the answer, so I'll give it a shot here.
I need to resize a desktop video to fit it on a mobile screen. Let's say the original width of the video was 1915 and the original height was 1075. I calculated the aspect ratio:
aspectRatio = (width/height); // aspectRatio = 1.78
Now my mobile screen resolution is: height = 1609, width = 1080.
How can I properly resize my video so that it keeps the same aspect ratio?
Thank you
aspectRatio = (width/height)
You always want aspectRatio to be 1.78, if you want to prevent stretching or cropping.
And, the max new height is 1609 and the max new width is 1080, so:
1.78 = (1080/height)
height = 1080/1.78 = 606.74...
OR
1.78 = (width/1609)
width = 1.78*1609 = 2864.02...
So, you can have 1080x606.74 (which fits on screen), or 2864.02x1609 (which doesn't fit).
So, your answer is 1080x606.74...
Let the original dimensions be Wo, Ho, and the target screen dimensions be Wt, Ht.
A rectangle with a fixed aspect ratio is limited by the height if the ratio of the target and original heights is less than the ratio of the widths, and limited by the width otherwise:
Coeff = Min(Wt/Wo, Ht/Ho)
W_Result = Wo * Coeff
H_Result = Ho * Coeff
or
if (Wt * Ho < Ht * Wo) then
    W_Result = Wt
    H_Result = Ho * Wt / Wo
else
    W_Result = Wo * Ht / Ho
    H_Result = Ht
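Here is a minimal Python sketch of that fit computation (the function name fit_to_screen is my own), applied to the numbers from the question. It uses the exact ratio rather than the rounded 1.78, hence the slightly different height:
def fit_to_screen(wo, ho, wt, ht):
    # Scale (wo, ho) to the largest size that fits inside (wt, ht)
    # while preserving the aspect ratio.
    coeff = min(wt / wo, ht / ho)
    return wo * coeff, ho * coeff

# Video 1915x1075 on a 1080x1609 portrait screen:
print(fit_to_screen(1915, 1075, 1080, 1609))  # -> (1080.0, 606.266...)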
I want to convert a GPS location (latitude, longitude) into x, y coordinates.
I found many links about this topic and applied them, but none gives me the correct answer!
I am following these steps to test the answers:
(1) First, I take two positions and calculate the distance between them using a map.
(2) Then I convert the two positions into x, y coordinates.
(3) Then I calculate the distance between the two points again, in x, y coordinates, and check whether it gives me the same result as in step (1).
One of the solutions I found is the following, but it doesn't give me the correct answer:
latitude = Math.PI * latitude / 180;
longitude = Math.PI * longitude / 180;
// adjust position by radians
latitude -= 1.570795765134; // subtract 90 degrees (in radians)
// and switch z and y
xPos = (app.radius) * Math.sin(latitude) * Math.cos(longitude);
zPos = (app.radius) * Math.sin(latitude) * Math.sin(longitude);
yPos = (app.radius) * Math.cos(latitude);
I also tried the approach from another link, but it still doesn't work well for me.
Any help on how to convert from (latitude, longitude) to (x, y)?
Thanks,
No exact solution exists
There is no isometric map from the sphere to the plane. When you convert lat/lon coordinates from the sphere to x/y coordinates in the plane, you cannot hope that all lengths will be preserved by this operation. You have to accept some kind of deformation. Many different map projections do exist, which can achieve different compromises between preservations of lengths, angles and areas. For smallish parts of earth's surface, transverse Mercator is quite common. You might have heard about UTM. But there are many more.
The formulas you quote compute x/y/z, i.e. a point in 3D space. But even there you'd not get correct distances automatically. The shortest distance between two points on the surface of the sphere would go through that sphere, whereas distances on the earth are mostly geodesic lengths following the surface. So they will be longer.
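To see the size of that effect, here is a small Python sketch (my own illustration, not part of the original answer) comparing the straight-line chord through the sphere with the great-circle distance along its surface:
import math

R = 6371.0  # mean Earth radius in km, assuming a spherical model

def to_xyz(lat, lon):
    # Spherical lat/lon (degrees) to 3D Cartesian coordinates.
    lat, lon = math.radians(lat), math.radians(lon)
    return (R * math.cos(lat) * math.cos(lon),
            R * math.cos(lat) * math.sin(lon),
            R * math.sin(lat))

def chord(p, q):
    # Straight-line distance through the sphere.
    return math.dist(to_xyz(*p), to_xyz(*q))

def great_circle(p, q):
    # Geodesic distance along the surface (central angle times radius).
    return 2 * R * math.asin(chord(p, q) / (2 * R))

p, q = (0.0, 0.0), (0.0, 90.0)  # a quarter of the equator
print(chord(p, q))         # ~9010 km, through the Earth
print(great_circle(p, q))  # ~10008 km, along the surface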
Approximation for small areas
If the part of the surface of the earth which you want to draw is relatively small, then you can use a very simple approximation. You can simply use the horizontal axis x to denote longitude λ, the vertical axis y to denote latitude φ. The ratio between these should not be 1:1, though. Instead you should use cos(φ0) as the aspect ratio, where φ0 denotes a latitude close to the center of your map. Furthermore, to convert from angles (measured in radians) to lengths, you multiply by the radius of the earth (which in this model is assumed to be a sphere).
x = r λ cos(φ0)
y = r φ
This is simple equirectangular projection. In most cases, you'll be able to compute cos(φ0) only once, which makes subsequent computations of large numbers of points really cheap.
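A minimal Python sketch of this projection (the reference latitude lat0 and the function name are my own choices):
import math

R = 6371.0  # Earth radius in km (spherical model)

def equirectangular(lat, lon, lat0):
    # Project lat/lon (degrees) to x/y (km), using cos(lat0) as the
    # aspect correction for a map centered near latitude lat0.
    x = R * math.radians(lon) * math.cos(math.radians(lat0))
    y = R * math.radians(lat)
    return x, y

# Two points about 1 km apart near Berlin (lat0 = 52.52):
x1, y1 = equirectangular(52.5200, 13.4050, 52.52)
x2, y2 = equirectangular(52.5290, 13.4050, 52.52)
print(abs(y2 - y1))  # ~1.0 km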
I want to share with you how I managed the problem. I used the equirectangular projection just like @MvG said, but this method gives you X and Y positions relative to the globe (or the entire map), which means you get global positions. In my case, I wanted to convert coordinates within a small area (about 500 m square), so I related the projected point to two other reference points, converting global positions into local (on-screen) positions, like this:
First, I choose two reference points (top-left and bottom-right) around the area onto which I want to project.
Once I have the global reference area in lat and lng, I do the same for screen positions. The objects containing this data are shown below.
//top-left reference point
var p0 = {
    scrX: 23.69,     // Minimum X position on screen
    scrY: -0.5,      // Minimum Y position on screen
    lat: -22.814895, // Latitude
    lng: -47.072892  // Longitude
}
//bottom-right reference point
var p1 = {
    scrX: 276,       // Maximum X position on screen
    scrY: 178.9,     // Maximum Y position on screen
    lat: -22.816419, // Latitude
    lng: -47.070563  // Longitude
}
var radius = 6371; //Earth Radius in Km
//## Now I can calculate the global X and Y for each reference point ##\\
// This function converts lat and lng coordinates to GLOBAL X and Y positions
function latlngToGlobalXY(lat, lng){
    //Calculates x based on the cos of the average of the latitudes,
    //converted from degrees to radians (the original snippet passed degrees
    //to Math.cos; this constant factor cancels in the interpolation below,
    //but radians make the "global" values meaningful)
    let x = radius*lng*Math.cos((p0.lat + p1.lat)/2 * Math.PI/180);
    //Calculates y based on latitude
    let y = radius*lat;
    return {x: x, y: y}
}
// Calculate global X and Y for top-left reference point
p0.pos = latlngToGlobalXY(p0.lat, p0.lng);
// Calculate global X and Y for bottom-right reference point
p1.pos = latlngToGlobalXY(p1.lat, p1.lng);
/*
* This gives me the X and Y in relation to map for the 2 reference points.
* Now we have the global AND screen areas and then we can relate both for the projection point.
*/
// This function converts lat and lng coordinates to SCREEN X and Y positions
function latlngToScreenXY(lat, lng){
    //Calculate global X and Y for projection point
    let pos = latlngToGlobalXY(lat, lng);
    //Calculate the percentage of the global X position relative to the total global width
    pos.perX = ((pos.x-p0.pos.x)/(p1.pos.x - p0.pos.x));
    //Calculate the percentage of the global Y position relative to the total global height
    pos.perY = ((pos.y-p0.pos.y)/(p1.pos.y - p0.pos.y));
    //Returns the screen position based on the reference points
    return {
        x: p0.scrX + (p1.scrX - p0.scrX)*pos.perX,
        y: p0.scrY + (p1.scrY - p0.scrY)*pos.perY
    }
}
//# The usage is like this #\\
var pos = latlngToScreenXY(-22.815319, -47.071718);
$point = $("#point-to-project");
$point.css("left", pos.x+"em");
$point.css("top", pos.y+"em");
As you can see, I made this in JavaScript, but the calculations can be translated to any language.
P.S. I'm applying the converted positions to an HTML element whose id is "point-to-project". To use this piece of code in your project, you should create this element (styled with position: absolute) or change the "usage" block.
Since this page shows up at the top of Google when searching for this same problem, I would like to provide a more practical answer. The answer by MvG is correct, but rather theoretical.
I have made a track-plotting app for the Fitbit Ionic in JavaScript. The code below is how I tackled the problem.
//LOCATION PROVIDER
index.js
var gpsFix = false;
var circumferenceAtLat = 0;
function locationSuccess(pos){
    if(!gpsFix){
        gpsFix = true;
        // meters per degree of longitude at this latitude (0.01745... = pi/180)
        circumferenceAtLat = Math.cos(pos.coords.latitude*0.01745329251)*111305;
    }
    pos.x = Math.round(pos.coords.longitude*circumferenceAtLat); // was "pos.x:", a syntax error
    pos.y = Math.round(pos.coords.latitude*110919);
    plotTrack(pos);
}
plotting.js
function plotTrack(position){
    let x = Math.round((this.segments[i].start.x - this.bounds.minX)*this.scale);
    let y = Math.round((this.bounds.maxY - this.segments[i].start.y)*this.scale); //height needs to be inverted
    //redraw?
    let redraw = false;
    //x or y out of bounds?
    if(position.x>this.bounds.maxX){
        this.bounds.maxX = (position.x-this.bounds.minX)*1.1+this.bounds.minX; //increase by 10%
        redraw = true;
    }
    if(position.x<this.bounds.minX){
        this.bounds.minX = this.bounds.maxX-(this.bounds.maxX-position.x)*1.1;
        redraw = true;
    }
    if(position.y>this.bounds.maxY){
        this.bounds.maxY = (position.y-this.bounds.minY)*1.1+this.bounds.minY; //increase by 10%
        redraw = true;
    }
    if(position.y<this.bounds.minY){
        this.bounds.minY = this.bounds.maxY-(this.bounds.maxY-position.y)*1.1;
        redraw = true;
    }
    if(redraw){
        reDraw();
    }
}
function reDraw(){
    let xScale = device.screen.width / (this.bounds.maxX-this.bounds.minX);
    let yScale = device.screen.height / (this.bounds.maxY-this.bounds.minY);
    if(xScale<yScale) this.scale = xScale;
    else this.scale = yScale;
    //Loop through your objects to redraw all of them
}
For completeness, I'd like to add my Python adaptation of @allexrm's code, which worked really well. Thanks again!
import math

radius = 6371  # Earth radius in km

class referencePoint:
    def __init__(self, scrX, scrY, lat, lng):
        self.scrX = scrX
        self.scrY = scrY
        self.lat = lat
        self.lng = lng

# Top-left reference point
p0 = referencePoint(0, 0, 52.526470, 13.403215)
# Bottom-right reference point
p1 = referencePoint(2244, 2060, 52.525035, 13.405809)

# This function converts lat and lng coordinates to GLOBAL X and Y positions
def latlngToGlobalXY(lat, lng):
    # Calculates x based on the cos of the average of the latitudes (in radians)
    x = radius*lng*math.cos(math.radians((p0.lat + p1.lat)/2))
    # Calculates y based on latitude
    y = radius*lat
    return {'x': x, 'y': y}

# Calculate global X and Y for the reference points; these assignments were
# missing in the original snippet but are required by latlngToScreenXY below
p0.pos = latlngToGlobalXY(p0.lat, p0.lng)
p1.pos = latlngToGlobalXY(p1.lat, p1.lng)

# This function converts lat and lng coordinates to SCREEN X and Y positions
def latlngToScreenXY(lat, lng):
    # Calculate global X and Y for the projection point
    pos = latlngToGlobalXY(lat, lng)
    # Percentage of the global X position relative to the total global width
    perX = ((pos['x']-p0.pos['x'])/(p1.pos['x'] - p0.pos['x']))
    # Percentage of the global Y position relative to the total global height
    perY = ((pos['y']-p0.pos['y'])/(p1.pos['y'] - p0.pos['y']))
    # Returns the screen position based on the reference points
    return {
        'x': p0.scrX + (p1.scrX - p0.scrX)*perX,
        'y': p0.scrY + (p1.scrY - p0.scrY)*perY
    }

pos = latlngToScreenXY(52.525607, 13.404572)
pos['x'] and pos['y'] contain the translated x & y coordinates of the lat & lng (52.525607, 13.404572).
I hope this is helpful for anyone looking, like me, for a proper solution to the problem of translating lat/lng into a local reference coordinate system.
Best
It's better to convert to UTM coordinates and treat those as x and y.
import utm
u = utm.from_latlon(12.917091, 77.573586)
The result will be (779260.623156606, 1429369.8665238516, 43, 'P').
The first two values can be treated as x, y coordinates; the 43P is the UTM zone, which can be ignored for small areas (up to about 668 km wide).
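For example, a small sketch of measuring the distance between two nearby points this way (the second coordinate is my own made-up example; utm.from_latlon returns easting and northing in meters):
import math
import utm

e1, n1, zone, letter = utm.from_latlon(12.917091, 77.573586)
e2, n2, _, _ = utm.from_latlon(12.927091, 77.573586)

# Within one zone, easting/northing behave like planar x/y in meters,
# so plain Euclidean distance approximates the ground distance well.
print(math.hypot(e2 - e1, n2 - n1))  # ~1106 m for 0.01 deg of latitude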
I am using the ZBar SDK for iPhone in order to scan a barcode. I want the reader to scan only a specific rectangle instead of the whole view; to do that, the scanCrop property of the reader has to be set to the desired rectangle.
I'm having a hard time understanding the rectangle parameter that has to be set.
Can someone please tell me what rect I should pass as an argument if, in portrait view, its coordinates would be CGRectMake(A, B, C, D)?
From ZBar's ZBarReaderView class documentation:
CGRect scanCrop
The region of the video image that will be scanned, in normalized image coordinates. Note that the video image is in landscape mode (default {{0, 0}, {1, 1}})
All of the arguments are normalized floats ranging from 0 to 1. In normalized values, theView.width is 1.0 and theView.height is 1.0; therefore, the default rect is {{0,0},{1,1}}.
So, for example, suppose I have a transparent UIView named scanView as the scanning region for my readerView. Rather than doing:
readerView.scanCrop = scanView.frame;
we should normalize every argument first:
CGFloat x,y,width,height;
x = scanView.frame.origin.x / readerView.bounds.size.width;
y = scanView.frame.origin.y / readerView.bounds.size.height;
width = scanView.frame.size.width / readerView.bounds.size.width;
height = scanView.frame.size.height / readerView.bounds.size.height;
readerView.scanCrop = CGRectMake(x, y, width, height);
It works for me. Hope that helps.
You can set the scan crop area like this:
reader.scanCrop = CGRectMake(x, y, width, height);
For example:
reader.scanCrop = CGRectMake(0.25, 0.25, 0.5, 0.45);
I used this and it's working for me.
This is the right way to adjust the crop area; I had wasted tons of time on it:
readerView.scanCrop = [self getScanCrop:cropRect readerViewBounds:contentView.bounds];
- (CGRect)getScanCrop:(CGRect)rect readerViewBounds:(CGRect)rvBounds{
    CGFloat x, y, width, height;
    // The video image is in landscape mode, so the axes are swapped
    // relative to the portrait view coordinates
    x = rect.origin.y / rvBounds.size.height;
    y = 1 - (rect.origin.x + rect.size.width) / rvBounds.size.width;
    width = rect.size.height / rvBounds.size.height;
    height = rect.size.width / rvBounds.size.width;
    return CGRectMake(x, y, width, height);
}
uniform sampler2D sampler0;
uniform vec2 tc_offset[9];

void blur()
{
    vec4 sample[9];
    for (int i = 0; i < 9; ++i)
        sample[i] = texture2D(sampler0, gl_TexCoord[0].st + tc_offset[i]);
    gl_FragColor = (sample[0] + (2.0 * sample[1]) + sample[2] +
                    (2.0 * sample[3]) + sample[4] + 2.0 * sample[5] +
                    sample[6] + 2.0 * sample[7] + sample[8]) / 13.0;
}
How does the sample[i] = texture2D(sampler0, ...) line work?
It seems that to blur an image, I first have to generate the image, yet here I'm somehow trying to query the very image I'm generating. How does this work?
It applies a blur kernel to the image. tc_offset needs to be properly initialized by the application to form a 3x3 area of sampling points around the actual texture coordinate:
0 0 0
0 x 0
0 0 0
(assuming x is the original coordinate). The offset for the upper-left sampling point would be -1/width,-1/height. The offset for the center point needs to be carefully aligned to texel center (the off-by-0.5 problem). Also, the hardware bilinear filter can be used to cheaply increase the amount of blur (by sampling between texels).
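For illustration, here is a small Python sketch (my own, not from the answer) of how the host application might fill tc_offset for a width x height texture:
# Nine tc_offset values for a 3x3 kernel: each offset is one texel step
# (1/width, 1/height) in normalized texture coordinates.
def make_tc_offsets(width, height):
    dx, dy = 1.0 / width, 1.0 / height
    return [(i * dx, j * dy) for j in (-1, 0, 1) for i in (-1, 0, 1)]

# For a 512x512 texture, the upper-left offset is (-1/512, -1/512):
print(make_tc_offsets(512, 512)[0])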
The rest of the shader weights the samples and divides by the total weight (13 here). Usually, the weights are precomputed as well:
for (int i = 0; i < NUM_SAMPLES; ++i) {
    result += texture2D(sampler, texcoord + offsetscaling[i].xy) * offsetscaling[i].z;
}
One way is to render your original image to a texture instead of to the screen.
Then you draw a full-screen quad using this shader, with that texture as its input, to post-process the image.
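A minimal sketch of that two-pass structure, using the moderngl Python bindings (my choice of library; the answers here are API-agnostic):
import moderngl
import numpy as np

ctx = moderngl.create_standalone_context()
W, H = 512, 512

# Pass 1 target: an offscreen texture instead of the screen.
scene_tex = ctx.texture((W, H), components=4)
scene_fbo = ctx.framebuffer(color_attachments=[scene_tex])
scene_fbo.use()
scene_fbo.clear(0.2, 0.3, 0.4, 1.0)
# ... draw the original (non-blurred) scene here ...

# Pass 2: a full-screen quad that samples scene_tex through the shader.
prog = ctx.program(
    vertex_shader='''
        #version 330
        in vec2 in_vert;
        out vec2 v_uv;
        void main() {
            v_uv = in_vert * 0.5 + 0.5;
            gl_Position = vec4(in_vert, 0.0, 1.0);
        }
    ''',
    fragment_shader='''
        #version 330
        uniform sampler2D sampler0;
        in vec2 v_uv;
        out vec4 frag;
        // A passthrough for brevity; the blur kernel would go here.
        void main() { frag = texture(sampler0, v_uv); }
    ''',
)
quad = ctx.buffer(np.array([-1, -1, 1, -1, -1, 1, 1, 1], dtype='f4').tobytes())
vao = ctx.simple_vertex_array(prog, quad, 'in_vert')

# Render into a second offscreen target; in a windowed app this would
# be ctx.screen.use() instead.
out_fbo = ctx.framebuffer(color_attachments=[ctx.texture((W, H), components=4)])
out_fbo.use()
scene_tex.use(location=0)
vao.render(moderngl.TRIANGLE_STRIP)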
As you note, in order to make a blurred image, you first need to make an image, and then blur it. This shader does (just) the second step, taking an image that was generated previously and blurring it. There needs to be additional code elsewhere to generate the original non-blurred image.