How can I crop/clip images in WinRT? I have an image that completely fills a Windows 8 window. I need to crop the image down the center and display the two halves in two separate grids. How can I do this in Windows 8? Is it possible to implement this without using WriteableBitmapEx? If not, how would I do it with WriteableBitmapEx?
There are actually many ways to do it, each with its own pros and cons.
WriteableBitmapEx seems like a popular solution. I have a similar implementation in WinRT XAML Toolkit. Both essentially copy blocks of pixels from the full image bitmap. It might not be the fastest way, but if you want an out-of-the-box solution, it is easy to use. Because you copy the pixels, you are not optimizing for memory use during the operation, so you might run out of memory sooner on very large images. On the other hand, you can easily re-crop and save the results to an image file if you want.
The BitmapDecoder solution Jan recommended is one I often use, since it is part of the platform, written in native code and likely highly optimized, and it doesn't copy the pixels. However, if you want to re-crop, you'll need to decode the image again.
Xyroid's suggestion of using Clip geometry is a quick, display-only solution. You don't actually modify the bitmap in memory - you simply display a region of it on the screen. You then need to keep the entire image in memory, and if you want to save the result you still have to update the bitmap, either with one of the first two solutions or perhaps with RenderTargetBitmap.Render() if screen resolution is enough for you. Updating the crop region displayed on the screen should be very quick, though, which makes it good for fast previews.
Another option is a Rectangle filled with an ImageBrush, where you apply a Transform to the brush and size the Rectangle to control the cropping. It is fairly similar to the Clip solution, except that instead of clipping the image you have to use the Transform (which you can also do on a Clip's RectangleGeometry). For quick updates, using a Transform might actually be a bit faster than updating the geometry, and it also supports scaling and rotation.
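As a rough illustration of that last approach, here is a minimal hedged sketch in XAML (the image path and offsets are placeholder values; the TranslateTransform shifts the brush so the desired region lands under the rectangle):
<Rectangle Width="400" Height="300">
    <Rectangle.Fill>
        <ImageBrush ImageSource="Assets/img100.png" Stretch="None">
            <ImageBrush.Transform>
                <!-- Placeholder offsets: shift the image so the wanted 400x300 region shows -->
                <TranslateTransform X="-483" Y="-234" />
            </ImageBrush.Transform>
        </ImageBrush>
    </Rectangle.Fill>
</Rectangle>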
You can use the BitmapDecoder and BitmapTransform classes. This example is very good for cropping. You should also read this tutorial for clipping. Basically you implement a function like this (taken from the example):
async public static Task<ImageSource> GetCroppedBitmapAsync(StorageFile originalImgFile, Point startPoint, Size cropSize, double scale)
{
    if (double.IsNaN(scale) || double.IsInfinity(scale))
    {
        scale = 1;
    }

    // Convert start point and size to integers.
    uint startPointX = (uint)Math.Floor(startPoint.X * scale);
    uint startPointY = (uint)Math.Floor(startPoint.Y * scale);
    uint height = (uint)Math.Floor(cropSize.Height * scale);
    uint width = (uint)Math.Floor(cropSize.Width * scale);

    using (IRandomAccessStream stream = await originalImgFile.OpenReadAsync())
    {
        // Create a decoder from the stream. With the decoder, we can get
        // the properties of the image.
        BitmapDecoder decoder = await BitmapDecoder.CreateAsync(stream);

        // The scaled size of the original image.
        uint scaledWidth = (uint)Math.Floor(decoder.PixelWidth * scale);
        uint scaledHeight = (uint)Math.Floor(decoder.PixelHeight * scale);

        // Refine the start point and the size so the crop stays inside the image.
        if (startPointX + width > scaledWidth)
        {
            startPointX = scaledWidth - width;
        }
        if (startPointY + height > scaledHeight)
        {
            startPointY = scaledHeight - height;
        }

        // Create the cropping BitmapTransform and define the bounds.
        BitmapTransform transform = new BitmapTransform();
        BitmapBounds bounds = new BitmapBounds();
        bounds.X = startPointX;
        bounds.Y = startPointY;
        bounds.Height = height;
        bounds.Width = width;
        transform.Bounds = bounds;
        transform.ScaledWidth = scaledWidth;
        transform.ScaledHeight = scaledHeight;

        // Get the cropped pixels within the bounds of the transform.
        PixelDataProvider pix = await decoder.GetPixelDataAsync(
            BitmapPixelFormat.Bgra8,
            BitmapAlphaMode.Straight,
            transform,
            ExifOrientationMode.IgnoreExifOrientation,
            ColorManagementMode.ColorManageToSRgb);
        byte[] pixels = pix.DetachPixelData();

        // Stream the bytes into a WriteableBitmap.
        // (PixelBuffer.AsStream() needs System.Runtime.InteropServices.WindowsRuntime.)
        WriteableBitmap cropBmp = new WriteableBitmap((int)width, (int)height);
        Stream pixStream = cropBmp.PixelBuffer.AsStream();
        pixStream.Write(pixels, 0, (int)(width * height * 4));
        cropBmp.Invalidate(); // redraw the bitmap with the updated pixel buffer
        return cropBmp;
    }
}
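A hedged usage sketch for the original question of splitting an image down the middle (the file URI, the 800x600 image size, and the leftImage/rightImage controls are assumptions):
// Inside an async method: crop each half and hand it to an Image control.
StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(
    new Uri("ms-appx:///Assets/img100.png"));
leftImage.Source = await GetCroppedBitmapAsync(file, new Point(0, 0), new Size(400, 600), 1);
rightImage.Source = await GetCroppedBitmapAsync(file, new Point(400, 0), new Size(400, 600), 1);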
The static XAML way: if my screen size is 1366x768 and I want to clip the center 400x300 of the image, I would do this.
<Image Source="Assets/img100.png" Stretch="Fill">
    <Image.Clip>
        <RectangleGeometry Rect="483,234,400,300" />
    </Image.Clip>
</Image>
The dynamic way. It clips the center at any resolution, though the height and width are fixed.
double _Height = 300, _Width = 400;
img.Clip = new RectangleGeometry
{
    Rect = new Rect((Window.Current.Bounds.Width - _Width) / 2, (Window.Current.Bounds.Height - _Height) / 2, _Width, _Height)
};
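If the window can be resized (snapped view, for instance), a hedged sketch of keeping the clip centered by recomputing it on SizeChanged:
Window.Current.SizeChanged += (s, e) =>
{
    // Recenter the clip for the new window size
    img.Clip = new RectangleGeometry
    {
        Rect = new Rect((e.Size.Width - _Width) / 2, (e.Size.Height - _Height) / 2, _Width, _Height)
    };
};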
Don't forget to check out...
How to resize Image in C# WinRT/winmd?
Crop image with rectangle
Crop image with dynamic rectangle coordinate
Cropping tool after file picker (like the one after you take a picture)
I am learning to use libgdx, and I am confused by the viewport and how objects are arranged on the screen. Let's assume my 2D world is 2x2 units wide and high. Now I create a camera whose viewport is 1x1, so I should see 25% of my world. Usually displays are not square shaped, so I would expect libgdx to squish and stretch this square to fit the display.
For a side scroller, you would set the viewport height the same as the world height and adjust the viewport width according to the aspect ratio. Independent of the aspect ratio of your display, you always see the full height of the world but a different expanse on the x-axis. Somebody with a display that is wider than it is high could see further along the x-axis than somebody with a square display, but proportions are maintained and there is no distortion. Up to this point, I thought I had mastered how the viewport logic works.
I am working through the book "Learning LibGDX Game Development", in which you develop the game "Canyon Bunny". The source code can be found here:
Canyon Bunny - GitHub
In the WorldRenderer class you find the initialization of the camera:
private void init() {
    batch = new SpriteBatch();
    camera = new OrthographicCamera(Constants.VIEWPORT_WIDTH, Constants.VIEWPORT_HEIGHT);
    camera.position.set(0, 0, 0);
    camera.update();
}
The viewport constants are kept in a separate Constants class:
public class Constants {
    // Visible game world is 5 meters wide
    public static final float VIEWPORT_WIDTH = 5.0f;
    // Visible game world is 5 meters tall
    public static final float VIEWPORT_HEIGHT = 5.0f;
}
As you can see, the viewport is 5x5, yet the game objects have the right proportions on my phone (16:9), and even on the desktop the game maintains the correct proportions when you change the window size. I don't understand why. I would expect the game to paint a square-shaped cutout of the world onto a rectangular display, which should lead to distortion. Why is that not the case? And why isn't it necessary to adapt the viewport's width or height to the aspect ratio?
The line:
cameraGUI.setToOrtho(true);
overrides the values you gave when you called:
cameraGUI = new OrthographicCamera(Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Here's the LibGDX code that shows why/how the viewport sizes you set were ignored:
/** Sets this camera to an orthographic projection using a viewport fitting the screen resolution, centered at
 * (Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight()/2), with the y-axis pointing up or down.
 * @param yDown whether y should be pointing down */
public void setToOrtho (boolean yDown) {
    setToOrtho(yDown, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
}

/** Sets this camera to an orthographic projection, centered at (viewportWidth/2, viewportHeight/2), with the y-axis pointing up
 * or down.
 * @param yDown whether y should be pointing down.
 * @param viewportWidth
 * @param viewportHeight */
public void setToOrtho (boolean yDown, float viewportWidth, float viewportHeight) {
    if (yDown) {
        up.set(0, -1, 0);
        direction.set(0, 0, 1);
    } else {
        up.set(0, 1, 0);
        direction.set(0, 0, -1);
    }
    position.set(zoom * viewportWidth / 2.0f, zoom * viewportHeight / 2.0f, 0);
    this.viewportWidth = viewportWidth;
    this.viewportHeight = viewportHeight;
    update();
}
So you would need to do this instead:
cameraGUI.setToOrtho(true, Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Also, don't forget to call update() right after wherever you change the position, viewport dimensions, or other properties of your camera.
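For instance (a trivial sketch; the position values are arbitrary):
camera.position.set(2.5f, 2.5f, 0);
camera.update(); // recomputes the camera's projection and view matrices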
I found the reason. If you look at the WorldRenderer class, there is a method resize(), and in this method the viewport is adapted to the aspect ratio. I am just surprised, because until now I thought the resize method was only called when resizing the window. Apparently it's also called at startup. Can anybody clarify?
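For reference, here is a minimal sketch of what such a resize() method typically looks like with this setup; it follows the common libgdx pattern of keeping the world height fixed and deriving the viewport width from the window's aspect ratio (the actual Canyon Bunny code may differ in detail):
public void resize(int width, int height) {
    // Keep the visible world height constant; widen or narrow the view
    // to match the window's aspect ratio so nothing gets distorted.
    camera.viewportWidth = (Constants.VIEWPORT_HEIGHT / height) * width;
    camera.update();
}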
So I'm developing a cross-platform React Native app. Per the design requirements, the app uses a lot of images as buttons, which need an initial height and width so that their aspect ratios are correct. From there I've built components that use these image buttons and placed those components on the main screen. I can get things to look perfect on one screen by using tops and lefts/rights to position the components according to the design requirements I've been given.
The problem I'm running into now is scaling this main screen for different screen sizes. I'm basically scaling x and y via the transform property on the parent-most view, like so: transform: [{ scaleX: .8 }, { scaleY: .8 }]. After writing a scaling function that accounts for a base height and the current height, this approach works for the actual size of things, but my positioning is all screwy.
I know I'm going about this wrong and am starting to think that I need to rethink my approach, but I'm stumped on how to get these components positioned correctly on each screen without having to hard-code it.
Is there any way to position a view using tops and lefts/rights, lock that in place, then scale it more like an image?
First of all, try using flex as far as you can. Then, when you need extra scaling for inner parts, for example, you can use scale functions. I have been using a scale function based on the screen size and the pixel density, and it has worked almost flawlessly so far.
import { Dimensions } from "react-native";
const { width, height } = Dimensions.get("window");
//Guideline sizes are based on standard ~5" screen mobile device
const guidelineBaseWidth = 350;
const guidelineBaseHeight = 680;
const screenSize = Math.sqrt(width * height) / 100;
const scale = size => (width / guidelineBaseWidth) * size;
const verticalScale = size => (height / guidelineBaseHeight) * size;
const moderateScale = (size, factor = 0.5) =>
  size + (scale(size) - size) * factor;
export { scale, verticalScale, moderateScale, screenSize };
Then you can import these and use them in your components. There are different types of scales; try them and see which works best for your components. Hope that helps.
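For example, a hedged usage sketch (the "./scaling" module path and the style names are assumptions):
import { StyleSheet } from "react-native";
import { scale, verticalScale, moderateScale } from "./scaling";

const styles = StyleSheet.create({
  button: {
    width: scale(120),          // grows/shrinks with screen width
    height: verticalScale(48),  // grows/shrinks with screen height
    padding: moderateScale(8),  // scales at half the usual rate by default
  },
});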
I ended up going through each view and converting everything that used hard-coded height and width pixels to setting just the width and then using the aspectRatio property, giving it the hard-coded height and width. Along with that, I implemented a scaling function that gave me a fraction of the largest view, say .9, and then scaled the main view using transform. People aren't kidding when they say this responsive UI stuff is tough.
2022 update -
I resolved this problem in my next app by using flex everywhere and a function called rem that I use anywhere that needs a fixed pixel count. With this I can set the width on an image, define an aspect ratio based on the image's original dimensions, and get an image that scales to the screen size; it's been super reliable.
// Static members of a Styles class (referenced below as Styles.*)
static width = Dimensions.get("window").width;
static height = Dimensions.get("window").height;
static orientation = 'PORTRAIT';
static maxWidth = 428;

static rem = size => {
  let divisor = window.lockedToPortrait || Styles.orientation === 'PORTRAIT' ? Styles.width : Styles.height;
  return Math.floor(size * (divisor / Styles.maxWidth));
};
The maxWidth is a predefined value from the largest device I could find to simulate, which was probably an iPhone Max.
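A hedged usage sketch (the style name and image dimensions are placeholders; assumes StyleSheet is imported from react-native):
const styles = StyleSheet.create({
  heroImage: {
    width: Styles.rem(300),  // fixed design size, scaled to the device
    aspectRatio: 600 / 400,  // the image's original width / height
  },
});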
There are other posts on Stack Overflow about pinch zooming, but I haven't found any helpful ones for OpenGL that do what I'm looking for. I am currently using the orthoM function to change the camera position and to do scaling in OpenGL. I have gotten the camera to move around, and pinch zooming works, but the zoom always goes toward the center of the OpenGL surface view coordinate system at 0,0. After trying different things, I haven't found a way that lets the camera move around while also pinch zooming toward the user's touch point (as an example, the touch controls in Clash of Clans are similar to what I am trying to make).
(The method I'm currently using to get the scale value is based on this post.)
My first attempt:
// mX and mY are the movement offsets based on the user's touch movements,
// and can be positive or negative
Matrix.orthoM(mProjectionMatrix, 0, ((-WIDTH/2f)+mX)*scale, ((WIDTH/2f)+mX)*scale,
        ((-HEIGHT/2f)+mY)*scale, ((HEIGHT/2f)+mY)*scale, 1f, 2f);
In the above code, I realize the camera moves toward the coordinate 0,0 because, as scale gets smaller, the values for the camera edges shrink toward 0. So although the zoom goes toward the coordinate system center, the camera movement itself works at the right speed at any scale level.
So, I then edited the code to this:
Matrix.orthoM(mProjectionMatrix, 0, (-WIDTH/2f)*scale+mX, (WIDTH/2f)*scale+mX,
        (-HEIGHT/2f)*scale+mY, (HEIGHT/2f)*scale+mY, 1f, 2f);
The edited code now makes the zoom go toward the center of the screen no matter where the camera is in the surface view coordinate system (although that isn't the full goal), but the camera movement is off, as the offset isn't adjusted for the different scale levels.
I'm still working on a solution myself, but if anyone has any advice or ideas on how this could be implemented, I would be glad to hear them.
Note: I don't think it matters, but I'm doing this on Android using Java.
EDIT:
Since I first posted this question, I have made some changes to my code. I found this post, which explains the logic of how to pan the camera to the correct position based on the scale, so that the zoom point remains in the same position.
My updated attempt:
// Only do the following if-block if two fingers are on the screen
if (zooming) {
    // midPoint is a PointF object that stores the coordinate of the
    // midpoint between the two fingers
    float scaleChange = scale - prevScale; // scale is the same as in my previous code
    float offsetX = -(midPoint.x * scaleChange);
    float offsetY = -(midPoint.y * scaleChange);
    cameraPos.x += offsetX;
    cameraPos.y += offsetY;
}

// cameraPos is a PointF object that stores the coordinate at the center of the screen,
// and replaces the previous values mX and mY
left = cameraPos.x - (WIDTH / 2f) * scale;
right = cameraPos.x + (WIDTH / 2f) * scale;
bottom = cameraPos.y - (HEIGHT / 2f) * scale;
top = cameraPos.y + (HEIGHT / 2f) * scale;
Matrix.orthoM(mProjectionMatrix, 0, left, right, bottom, top, 1f, 2f);
The code now works quite a bit better, but it still isn't completely accurate. I tested it with panning disabled, and the zooming worked somewhat better. However, with panning enabled, the zooming doesn't focus on the zoom point at all.
I finally found a solution while working on another project, so I'll post (in the simplest form possible) what worked for me, in case it helps anyone.
final float currentPointersDistance = this.calculateDistance(pointer1CurrentX, pointer1CurrentY, pointer2CurrentX, pointer2CurrentY);
final float zoomFactorMultiplier = currentPointersDistance/initialPointerDistance; //> Get an initial distance between two pointers before calling this
final float newZoomFactor = previousZoomFactor*zoomFactorMultiplier;
final float zoomFactorChange = newZoomFactor-previousZoomFactor; //> previousZoomFactor is the current value of the zoom
//> The x and y values of the variables are in scene coordinate form (not surface)
final float distanceFromCenterToMidpointX = camera.getCenterX()-currentPointersMidpointX;
final float distanceFromCenterToMidpointY = camera.getCenterY()-currentPointersMidpointY;
final float offsetX = -(distanceFromCenterToMidpointX*zoomFactorChange/newZoomFactor);
final float offsetY = -(distanceFromCenterToMidpointY*zoomFactorChange/newZoomFactor);
camera.setZoomFactor(newZoomFactor);
camera.translate(offsetX, offsetY);
initialPointerDistance = currentPointersDistance; //> Make sure to do this
Method used to calculate the distance between two pointers:
public float calculateDistance(float pX1, float pY1, float pX2, float pY2) {
    float x = pX2 - pX1;
    float y = pY2 - pY1;
    return (float)Math.sqrt((x * x) + (y * y));
}
Camera class methods used above:
public float getXMin() {
    return centerX - ((centerX - xMin) / zoomFactor);
}

public float getYMin() {
    return centerY - ((centerY - yMin) / zoomFactor);
}

public float getXMax() {
    return centerX + ((xMax - centerX) / zoomFactor);
}

public float getYMax() {
    return centerY + ((yMax - centerY) / zoomFactor);
}

public void setZoomFactor(float pZoomFactor) {
    zoomFactor = pZoomFactor;
}

public void translate(float pX, float pY) {
    xMin += pX;
    yMin += pY;
    xMax += pX;
    yMax += pY;
}
The orthoM() function is called like the following:
Matrix.orthoM(projectionMatrix, 0, camera.getXMin(), camera.getXMax(), camera.getYMin(), camera.getYMax(), near, far);
I'm searching for a program which detects the border of an image; for example, I have a square and the program detects its X/Y coordinates.
Example image: http://img709.imageshack.us/img709/1341/22444641.png
This is a very simple edge detector, suitable for binary images. It just calculates the differences between horizontal and vertical pixels, like image.pos[1,1] = image.pos[1,1] - image.pos[1,2], and the same for vertical differences. Bear in mind that you also need to normalize the result to the range 0..255.
But if you just need a program, use Adobe Photoshop.
Code written in C#.
public void SimpleEdgeDetection()
{
    // Check the pixel format before locking, so the bitmap is never left locked.
    if (image.PixelFormat != PixelFormat.Format8bppIndexed)
        return;
    BitmapData data = Util.SetImageToProcess(image);
    unsafe
    {
        byte* ptr1 = (byte*)data.Scan0;
        byte* ptr2;
        int offset = data.Stride - data.Width;
        int height = data.Height - 1;
        int px;
        for (int y = 0; y < height; y++)
        {
            ptr2 = (byte*)ptr1 + data.Stride;
            for (int x = 0; x < data.Width; x++, ptr1++, ptr2++)
            {
                // Sum of the horizontal and vertical differences, clamped to the max gray level
                px = Math.Abs(ptr1[0] - ptr1[1]) + Math.Abs(ptr1[0] - ptr2[0]);
                if (px > Util.MaxGrayLevel) px = Util.MaxGrayLevel;
                ptr1[0] = (byte)px;
            }
            ptr1 += offset;
        }
    }
    image.UnlockBits(data);
}
Method from the Util class:
static public BitmapData SetImageToProcess(Bitmap image)
{
    if (image != null)
        return image.LockBits(
            new Rectangle(0, 0, image.Width, image.Height),
            ImageLockMode.ReadWrite,
            image.PixelFormat);
    return null;
}
If you need more explanation or an algorithm, ask with more specific information rather than being so general.
It depends what you want to do with the border. If you just want the values along the edges of the region, use a connected-components algorithm; you must know the value of the region before running it. It will navigate around the border and collect the outside of the region. If you are trying to detect just the outside lines, take the gradient of the image and it will reveal where the lines are. To do this, convolve the image with an edge-detection filter such as Prewitt or Sobel.
You can use an image-processing library such as OpenCV, which has C++ and Python APIs.
You should look for edge-detection functions such as Canny edge detection.
Of course, this requires some diving into image processing.
The example image you gave should be straightforward to detect; how noisy/varied are the images going to be?
A shape-recognition algorithm might help you out, provided the shape has a solid border of some kind and the background is a solid colour.
From the sounds of it, you just want a blob-extraction algorithm. After that, the lowest/highest values of x/y will give you the coordinates of the corners, as the sketch below illustrates.
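As a hedged illustration of that bounding-box idea in C# (matching the earlier answer's language), here is a minimal sketch that scans for non-background pixels and reports the bounding rectangle; the background-color parameter and the slow GetPixel scan are simplifying assumptions (a real implementation would use LockBits):
using System.Drawing;

static Rectangle FindBoundingBox(Bitmap bmp, Color background)
{
    int minX = bmp.Width, minY = bmp.Height, maxX = -1, maxY = -1;
    for (int y = 0; y < bmp.Height; y++)
    {
        for (int x = 0; x < bmp.Width; x++)
        {
            // Any pixel that differs from the background belongs to the shape
            if (bmp.GetPixel(x, y).ToArgb() != background.ToArgb())
            {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    if (maxX < 0) return Rectangle.Empty; // nothing found
    return new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
}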
I want to draw an image over another without drawing its background. The image I want to draw is a star, and I want to put some stars over a map image.
The problem is that the star image has a white background, and when I draw over the map the white background appears.
My method to draw the star is like this:
Graphics graphics = Graphics.FromImage(map);
Image customIcon = Image.FromFile("../../star.png");
graphics.DrawImage(customIcon, x, y);
I tried with transparent-background images (PNG and GIF formats), and it always draws something surrounding the star. How can I draw a star without its background?
The program is for Windows Mobile 5.0 and above, with Compact Framework 2.0 SP2 and C#.
I tried with this code:
Graphics g = Graphics.FromImage(mapa);
Image iconoPOI = (System.Drawing.Image)Recursos.imagenPOI;
Point iconoOffset = new Point(iconoPOI.Width, iconoPOI.Height);
System.Drawing.Rectangle rectangulo;
ImageAttributes transparencia = new ImageAttributes();
transparencia.SetColorKey(Color.White, Color.White);
rectangulo = new System.Drawing.Rectangle(x, y, iconoPOI.Width, iconoPOI.Height);
g.DrawImage(iconoPOI, rectangulo, x, y, iconoPOI.Width, iconoPOI.Height, GraphicsUnit.Pixel, transparencia);
But I don't see anything on the map.
x and y are the coordinates where I want to draw iconoPOI, which is a PNG image with a white background.
Thank you!
One valid answer can be found here:
Answer
Thank you!
Normally this task is pretty complicated (you have to tap the Windows API BitBlt function, create a black-and-white mask image, and so on), but here's a simple way to do it.
Assuming you have one bitmap for your background image (bmpMap) and one for your star image (bmpStar), and you need to draw the star at (xoffset, yoffset), this method will do what you need:
for (int x = 0; x < bmpStar.Width; x++)
{
    for (int y = 0; y < bmpStar.Height; y++)
    {
        Color pixel = bmpStar.GetPixel(x, y);
        // Compare ARGB values: GetPixel returns an unnamed color, so
        // pixel != Color.White would be true even for white pixels.
        if (pixel.ToArgb() != Color.White.ToArgb())
        {
            bmpMap.SetPixel(x + xoffset, y + yoffset, pixel);
        }
    }
}
SetPixel and GetPixel are incredibly slow (the preferred way is to use the bitmap's LockBits method; there are questions here on SO that explain how to use it), but this will get you started.
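For reference, a hedged sketch of the same color-key copy using LockBits; it assumes both bitmaps can be locked as 32-bit ARGB and uses unsafe pointer access, which is the usual desktop GDI+ pattern and may need adjusting for the Compact Framework:
using System.Drawing;
using System.Drawing.Imaging;

static void DrawWithColorKey(Bitmap bmpMap, Bitmap bmpStar, int xoffset, int yoffset)
{
    Rectangle starRect = new Rectangle(0, 0, bmpStar.Width, bmpStar.Height);
    Rectangle mapRect = new Rectangle(xoffset, yoffset, bmpStar.Width, bmpStar.Height);
    BitmapData src = bmpStar.LockBits(starRect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    BitmapData dst = bmpMap.LockBits(mapRect, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
    try
    {
        unsafe
        {
            for (int y = 0; y < bmpStar.Height; y++)
            {
                int* srcRow = (int*)((byte*)src.Scan0 + y * src.Stride);
                int* dstRow = (int*)((byte*)dst.Scan0 + y * dst.Stride);
                for (int x = 0; x < bmpStar.Width; x++)
                {
                    // Copy every pixel whose RGB isn't the white color key
                    if ((srcRow[x] & 0x00FFFFFF) != 0x00FFFFFF)
                        dstRow[x] = srcRow[x];
                }
            }
        }
    }
    finally
    {
        bmpStar.UnlockBits(src);
        bmpMap.UnlockBits(dst);
    }
}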