Java2D dash pattern

I need a solution for making an animated selection tool with Java2D.
I know the BasicStroke and Rectangle2D APIs, but I don't have an idea for how to make black and white dashes
and animate them. Does anybody have an idea for this?
Thanks

void paint(Graphics2D g) {
    // define these constants (and the line endpoints x1, y1, x2, y2) yourself
    float width = 1f;
    float dashWidth = 4f;
    float offset = 0f;
    // draw solid, i.e. background
    g.setColor(Color.WHITE);
    g.setStroke(new BasicStroke(width, BasicStroke.CAP_BUTT, BasicStroke.JOIN_MITER, 10f));
    g.drawLine(x1, y1, x2, y2);
    // draw the pattern on top
    float[] pattern = new float[] {dashWidth, dashWidth * 2};
    g.setColor(Color.BLACK);
    g.setStroke(new BasicStroke(width, BasicStroke.CAP_BUTT, BasicStroke.JOIN_MITER, 10f, pattern, offset));
    g.drawLine(x1, y1, x2, y2);
}
This works with any shape, so replace drawLine with drawRect if that's what you need. To animate, switch colors and repaint.
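If you'd rather not swap colors, another common way to get the "marching ants" effect is to advance the dash phase on a timer. Below is a minimal sketch (the class name, tick rate, and 4-pixel dash are my own choices, not from the post above):

import javax.swing.*;
import java.awt.*;

class MarchingAnts extends JComponent {
    private float phase = 0f;

    MarchingAnts() {
        // repaint roughly 20 times per second, advancing the dash phase each tick
        new Timer(50, e -> { phase += 1f; repaint(); }).start();
    }

    @Override
    protected void paintComponent(Graphics g) {
        Graphics2D g2 = (Graphics2D) g;
        Rectangle r = new Rectangle(20, 20, getWidth() - 40, getHeight() - 40);
        // solid background line
        g2.setColor(Color.WHITE);
        g2.setStroke(new BasicStroke(1f));
        g2.draw(r);
        // dashes drawn on top, shifted by the current phase
        g2.setColor(Color.BLACK);
        g2.setStroke(new BasicStroke(1f, BasicStroke.CAP_BUTT,
                BasicStroke.JOIN_MITER, 10f, new float[] {4f, 4f}, phase));
        g2.draw(r);
    }
}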

Related

How to pinch zoom into a certain location in OpenGL ES 2?

There are other posts on Stack Overflow on pinch zooming, but I haven't found any helpful ones for OpenGL that do what I'm looking for. I am currently using the orthoM function to change the camera position and to do scaling in OpenGL. I have gotten the camera to move around, and have gotten pinch zooming to work, but the zooming always zooms into the center of the OpenGL surface view coordinate system at 0,0. After trying different things, I haven't found a way yet that allows the camera to move around, while also allowing pinch zooming to the user's touch point (as an example, the touch controls in Clash of Clans are similar to what I am trying to make).
(The method I'm currently using to get the scale value is based on this post.)
My first attempt:
// mX and mY are the movement offsets based on the user's touch movements,
// and can be positive or negative
Matrix.orthoM(mProjectionMatrix, 0, ((-WIDTH/2f)+mX)*scale, ((WIDTH/2f)+mX)*scale,
((-HEIGHT/2f)+mY)*scale, ((HEIGHT/2f)+mY)*scale, 1f, 2f);
In the above code, I realize that the camera moves towards the coordinate 0,0 because as scale gets increasingly smaller, the values for the camera edges decrease towards 0. So although the zoom pulls toward the center of the coordinate system, the camera movement itself works at the right speed at any scale level.
So, I then edited the code to this:
Matrix.orthoM(mProjectionMatrix, 0, (-WIDTH/2f)*scale+mX, (WIDTH/2f)*scale+mX,
(-HEIGHT/2f)*scale+mY, (HEIGHT/2f)*scale+mY, 1f, 2f);
The edited code now makes the zoom go toward the center of the screen no matter where in the surface view coordinate system the camera is (although that isn't the full goal), but the camera movement is off, as the offset isn't adjusted for the different scale levels.
I'm still working on a solution myself, but if anyone has any advice or ideas on how this could be implemented, I would be glad to hear them.
Note, I don't think it matters, but I'm doing this in Android and using Java.
EDIT:
Since I first posted this question, I have made some changes to my code. I found this post, which explains the logic of how to pan the camera to the correct position based on the scale, so that the zoom point remains in the same position.
My updated attempt:
// Only do the following if-block if two fingers are on the screen
if (zooming) {
    // midPoint is a PointF object that stores the coordinate of the
    // midpoint between the two fingers
    float scaleChange = scale - prevScale; // scale is the same as in my previous code
    float offsetX = -(midPoint.x * scaleChange);
    float offsetY = -(midPoint.y * scaleChange);
    cameraPos.x += offsetX;
    cameraPos.y += offsetY;
}
// cameraPos is a PointF object that stores the coordinate at the center of the screen,
// and replaces the previous values mX and mY
left = cameraPos.x-(WIDTH/2f)*scale;
right = cameraPos.x+(WIDTH/2f)*scale;
bottom = cameraPos.y-(HEIGHT/2f)*scale;
top = cameraPos.y+(HEIGHT/2f)*scale;
Matrix.orthoM(mProjectionMatrix, 0, left, right, bottom, top, 1f, 2f);
The code does work quite a bit better now, but it still isn't completely accurate. I tested the code with panning disabled, and the zooming worked somewhat better. However, when panning is enabled, the zooming doesn't focus in on the zoom point at all.
I finally found a solution while working on another project, so I'll post what worked for me in the simplest form possible, in case it helps anyone.
final float currentPointersDistance = this.calculateDistance(pointer1CurrentX, pointer1CurrentY, pointer2CurrentX, pointer2CurrentY);
final float zoomFactorMultiplier = currentPointersDistance/initialPointerDistance; //> Get an initial distance between two pointers before calling this
final float newZoomFactor = previousZoomFactor*zoomFactorMultiplier;
final float zoomFactorChange = newZoomFactor-previousZoomFactor; //> previousZoomFactor is the current value of the zoom
//> The x and y values of the variables are in scene coordinate form (not surface)
final float distanceFromCenterToMidpointX = camera.getCenterX()-currentPointersMidpointX;
final float distanceFromCenterToMidpointY = camera.getCenterY()-currentPointersMidpointY;
final float offsetX = -(distanceFromCenterToMidpointX*zoomFactorChange/newZoomFactor);
final float offsetY = -(distanceFromCenterToMidpointY*zoomFactorChange/newZoomFactor);
camera.setZoomFactor(newZoomFactor);
camera.translate(offsetX, offsetY);
initialPointerDistance = currentPointersDistance; //> Make sure to do this
Method used to calculate the distance between two pointers:
public float calculateDistance(float pX1, float pY1, float pX2, float pY2) {
float x = pX2-pX1;
float y = pY2-pY1;
return (float)Math.sqrt((x*x)+(y*y));
}
Camera class methods used above:
public float getXMin() {
return centerX-((centerX-xMin)/zoomFactor);
}
public float getYMin() {
return centerY-((centerY-yMin)/zoomFactor);
}
public float getXMax() {
return centerX+((xMax-centerX)/zoomFactor);
}
public float getYMax() {
return centerY+((yMax-centerY)/zoomFactor);
}
public void setZoomFactor(float pZoomFactor) {
zoomFactor = pZoomFactor;
}
public void translate(float pX, float pY) {
xMin += pX;
yMin += pY;
xMax += pX;
yMax += pY;
}
The orthoM() function is called like the following:
Matrix.orthoM(projectionMatrix, 0, camera.getXMin(), camera.getXMax(), camera.getYMin(), camera.getYMax(), near, far);
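One caveat with the solution above: the pointer midpoint has to be expressed in scene coordinates, not raw surface pixels. A minimal conversion sketch, assuming hypothetical surfaceWidth and surfaceHeight fields holding the view's pixel dimensions and the Camera getters shown above:

// Hypothetical conversion from surface (pixel) coordinates to scene
// coordinates, using the camera's current visible bounds.
public float surfaceToSceneX(float pSurfaceX) {
    return camera.getXMin()
            + (pSurfaceX / surfaceWidth) * (camera.getXMax() - camera.getXMin());
}

public float surfaceToSceneY(float pSurfaceY) {
    // surface y grows downward while scene y grows upward, so flip the axis
    return camera.getYMin()
            + (1f - pSurfaceY / surfaceHeight) * (camera.getYMax() - camera.getYMin());
}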

Qt5 QtChart drop vertical lines while using QScatterSeries

When I am using QScatterSeries, I can very easily draw a point at (x, y). However, instead of points I would like to draw short lines, like in the figure below. How can I go about doing so?
I tried using RectangleMarker, but it just draws a fat square. I would prefer a thin line about 2px wide and 20px in height.
Is there a way I can add custom marker shapes?
Here are the code and the settings I use to transform my points into lines:
//create scatter series to draw point
m_pSeries1 = new QtCharts::QScatterSeries();
m_pSeries1->setName("trig");
m_pSeries1->setMarkerSize(100.0);
//draw a thin vertical line (from (50, 0) to (50, 100))
QPainterPath linePath;
linePath.moveTo(50, 0);
linePath.lineTo(50, 100);
linePath.closeSubpath();
//adapt the size of the image with the size of your rectangle
QImage line1(100, 100, QImage::Format_ARGB32);
line1.fill(Qt::transparent);
QPainter painter1(&line1);
painter1.setRenderHint(QPainter::Antialiasing);
painter1.setPen(QColor(0, 0, 0));
painter1.setBrush(painter1.pen().color());
painter1.drawPath(linePath);
//attach your image of rectangle to your series
m_pSeries1->setBrush(line1);
m_pSeries1->setPen(QColor(Qt::transparent));
//then use the classic QtChart pipeline...
You can play with the marker size, the dimensions of the image, and the drawing pattern in the painter to adapt the size and shape of the rectangle to obtain a line.
In the picture, it's the black line. As you can see, the process can be repeated for other series.
Keep in mind that you cannot use OpenGL acceleration:
m_pSeries0->setUseOpenGL(true);
My work is based on the QtCharts/QScatterSeries example: QScatterSeries example
Hope it will help you.
Florian

Keeping an object made in OpenGL within the boundaries of a window

I have been working on a game using Objective-C and OpenGL. I know how to create the object and how to make it move the way I want, but I cannot keep it within the window. How do you keep the object within the window?
OpenGL FAQ, section 8.070: How can I automatically calculate a view that displays my entire model?:
The following is from a posting by Dave Shreiner on setting up a basic
viewing system:
First, compute a bounding sphere for all objects in your scene. This
should provide you with two bits of information: the center of the
sphere (let ( c.x, c.y, c.z ) be that point) and its diameter (call it
"diam").
Next, choose a value for the zNear clipping plane. General guidelines
are to choose something larger than, but close to 1.0. So, let's say
you set:
zNear = 1.0;
zFar = zNear + diam;
Structure your matrix calls in this order (for an Orthographic projection):
GLdouble left = c.x - diam;
GLdouble right = c.x + diam;
GLdouble bottom = c.y - diam;
GLdouble top = c.y + diam;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(left, right, bottom, top, zNear, zFar);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
This approach should center your objects in the middle of the window and stretch them to fit (i.e., it assumes that you're using a window with aspect ratio = 1.0). If your window isn't square, compute left, right, bottom, and top, as above, and put in the following logic before the call to glOrtho():
GLdouble aspect = (GLdouble) windowWidth / windowHeight;
if ( aspect < 1.0 ) { // window taller than wide
bottom /= aspect;
top /= aspect;
} else {
left *= aspect;
right *= aspect;
}
The above code should position the objects in your scene appropriately. If you intend to manipulate the view (i.e., rotate, etc.), you need to add a viewing transform to it.
A typical viewing transform will go on the ModelView matrix and might
look like this:
gluLookAt(0.0, 0.0, 2.0 * diam,
          c.x, c.y, c.z,
          0.0, 1.0, 0.0);
a "look at" camera is a very convenient technique to make a viewer track an object (hence however the object or camera point moves, the object will stay onscreen). see 'gluLookAt' for a commonly provided implementation, most 3d helper code libraries will have one. You give it a desired camera point, object of interest, and 'up-vector', and it will create an appropriate world (camera transformation) matrix.
Otherwise if you're in control of the object, just don't move it outside of the initial frustum.
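If you take that second route, clamping the object's position against the ortho bounds each frame is enough to keep it onscreen. A minimal sketch in Java, assuming the left/right/bottom/top values from the glOrtho() call above and an object with centre (objectX, objectY) and bounding radius radius (all names are illustrative):

// Clamp a value into [min, max].
static float clamp(float value, float min, float max) {
    return Math.max(min, Math.min(max, value));
}

// each frame, after moving the object, keep its bounding circle inside the view volume:
objectX = clamp(objectX, left + radius, right - radius);
objectY = clamp(objectY, bottom + radius, top - radius);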

Why do my raytraced spheres have dark lines when lit with multiple light sources?

I have a simple raytracer that only traces rays up to the first intersection. The scene looks OK with either of the two light sources individually, but when both lights are in the scene, there are dark shadows where the lit area from one ends, even in the middle of an area lit by the other light source (particularly noticeable on the green ball). The transition from the 'area lit by both light sources' to the 'area lit by just one light source' seems to be slightly darker than the 'area lit by just one light source'.
The code where I'm adding the lighting effects is:
// trace lights
for (int l = 0; l < primitives.count; l++) {
    Primitive *p = [primitives objectAtIndex:l];
    if (p.light)
    {
        Sphere *lightSource = (Sphere *)p;
        // calculate diffuse shading
        Vector3 *light = [[Vector3 alloc] init];
        light.x = lightSource.centre.x - intersectionPoint.x;
        light.y = lightSource.centre.y - intersectionPoint.y;
        light.z = lightSource.centre.z - intersectionPoint.z;
        [light normalize];
        Vector3 *normal = [[primitiveThatWasHit getNormalAt:intersectionPoint] retain];
        if (primitiveThatWasHit.material.diffuse > 0)
        {
            float illumination = DOT(normal, light);
            if (illumination > 0)
            {
                float diff = illumination * primitiveThatWasHit.material.diffuse;
                // add diffuse component to ray color
                colour.red += diff * primitiveThatWasHit.material.colour.red * lightSource.material.colour.red;
                colour.blue += diff * primitiveThatWasHit.material.colour.blue * lightSource.material.colour.blue;
                colour.green += diff * primitiveThatWasHit.material.colour.green * lightSource.material.colour.green;
            }
        }
        [normal release];
        [light release];
    }
}
How can I make it look right?
It's a perceptual effect called Mach banding.
You are also very likely viewing the images in the wrong color space. Your ray tracer is doing the lighting math in a "linear" space, but then you are almost certainly viewing those images on a display with a nonlinear response, and therefore not even seeing the correct results. This could easily be making the Mach bands much more prominent than if you were displaying them properly. Try learning about gamma correction.
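As a rough illustration of that last point, gamma encoding is applied once per channel just before a pixel is written out. A sketch, assuming linear channel values in [0, 1] and a simple 2.2 power curve rather than the exact sRGB transfer function:

// Convert a linear-light channel value to an 8-bit display value using an
// approximate gamma of 2.2 (the exact sRGB curve is piecewise, but this
// shows the idea).
static int toDisplayByte(float linear) {
    float clamped = Math.max(0f, Math.min(1f, linear));
    return Math.round((float) Math.pow(clamped, 1.0 / 2.2) * 255f);
}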
Your eyes are deceiving you. If you move the spheres from the three pictures together, you will see very clearly that the areas are the same color when lit by a single light and brighter when lit by both. If you want to make it look nicer, I suggest you add a whole arc of light sources between the current ones.
You've saturated one colour channel in the image; turn down the brightness a bit and see what happens.
Are you sure your lighting directions are both normalized?
It may be worth throwing an assert in there.
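In Java-like form, the suggested check could look like this (assuming a vector type with x, y, z fields; the tolerance is arbitrary):

// Check that a light direction has unit length before using it in lighting.
float len = (float) Math.sqrt(light.x * light.x + light.y * light.y + light.z * light.z);
assert Math.abs(len - 1.0f) < 1e-4f : "light direction is not normalized";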

.NET CF 2.0: Draw a PNG over an image with background transparency

I want to draw one image over another without drawing its background. The image I want to draw is a star; I want to put some stars over a map image.
The problem is that the star image has a white background, and when I draw over the map the white background appears.
My method to draw the star is like this:
Graphics graphics = Graphics.FromImage(map);
Image customIcon = Image.FromFile("../../star.png");
graphics.DrawImage(customIcon, x, y);
I tried with transparent background images (PNG and GIF formats), and it always draws something surrounding the star. How can I draw a star without its background?
The program is for Windows Mobile 5.0 and above, with Compact Framework 2.0 SP2 and C#.
I tried with this code:
Graphics g = Graphics.FromImage(mapa);
Image iconoPOI = (System.Drawing.Image)Recursos.imagenPOI;
Point iconoOffset = new Point(iconoPOI.Width, iconoPOI.Height);
System.Drawing.Rectangle rectangulo;
ImageAttributes transparencia = new ImageAttributes();
transparencia.SetColorKey(Color.White, Color.White);
rectangulo = new System.Drawing.Rectangle(x, y, iconoPOI.Width, iconoPOI.Height);
g.DrawImage(iconoPOI, rectangulo, x, y, iconoPOI.Width, iconoPOI.Height, GraphicsUnit.Pixel, transparencia);
But I don't see anything on the map.
X and Y are the coordinates where I want to draw iconoPOI, which is a PNG image with a white background.
Thank you!
One valid answer can be found here:
Answer
Thank you!
Normally this task is pretty complicated (you have to tap into the Windows API BitBlt function, create a black-and-white mask image, and other stuff), but here's a simple way to do it.
Assuming you have one bitmap for your background image (bmpMap) and one for your star image (bmpStar), and you need to draw the star at (xoffset, yoffset), this method will do what you need:
for (int x = 0; x < bmpStar.Width; x++)
{
    for (int y = 0; y < bmpStar.Height; y++)
    {
        Color pixel = bmpStar.GetPixel(x, y);
        // compare ARGB values: Color's equality operator also compares the
        // "named color" state, so a GetPixel result never equals Color.White
        if (pixel.ToArgb() != Color.White.ToArgb())
        {
            bmpMap.SetPixel(x + xoffset, y + yoffset, pixel);
        }
    }
}
SetPixel and GetPixel are incredibly slow (the preferred way is to use the bitmap's LockBits method - there are questions here on SO that explain how to use it), but this will get you started.