Is it possible to create a horizontal bar chart and draw the bars from right to left, i.e. a mirror reflection? I can't use rotate in my XML because the labels would be reversed as well.
You can swap the HorizontalBarChartRenderer for your own custom renderer and override this function: protected void drawDataSet(Canvas c, IBarDataSet dataSet, int index);
Add the snippet below after the "// initialize the buffer" comment; the mirroring code is marked "NEWLY ADDED CODE BELOW":
BarBuffer buffer = mBarBuffers[index];
buffer.setPhases(phaseX, phaseY);
buffer.setDataSet(index);
buffer.setInverted(mChart.isInverted(dataSet.getAxisDependency()));
buffer.setBarWidth(mChart.getBarData().getBarWidth());
buffer.feed(dataSet);

trans.pointValuesToPixel(buffer.buffer);

final boolean isSingleColor = dataSet.getColors().size() == 1;

if (isSingleColor) {
    mRenderPaint.setColor(dataSet.getColor());
}

for (int j = 0; j < buffer.size(); j += 4) {

    if (!mViewPortHandler.isInBoundsTop(buffer.buffer[j + 3]))
        break;

    if (!mViewPortHandler.isInBoundsBottom(buffer.buffer[j + 1]))
        continue;

    if (!isSingleColor) {
        // Set the color for the currently drawn value. If the index
        // is out of bounds, reuse colors.
        mRenderPaint.setColor(dataSet.getColor(j / 4));
    }

    // NEWLY ADDED CODE BELOW
    // Mirror each bar around the vertical center line of the content
    // area: x -> midLine + (midLine - x), i.e. 2 * midLine - x.
    float left = buffer.buffer[j];
    float right = buffer.buffer[j + 2];
    float midLine = (mViewPortHandler.contentLeft() + mViewPortHandler.contentRight()) / 2;
    buffer.buffer[j] = midLine + (midLine - right);
    buffer.buffer[j + 2] = midLine + (midLine - left);

    // ... the rest of the loop (drawing the bar rect) stays the same
}
Your overridden drawDataSet should look like the snippet above; the chart then draws its bars from right to left while the labels stay upright.
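If it helps, here is a minimal sketch of how such a renderer could be wired up. This assumes MPAndroidChart's usual HorizontalBarChartRenderer constructor and package layout; MirroredHorizontalBarChartRenderer is a hypothetical name:

import android.graphics.Canvas;
import com.github.mikephil.charting.animation.ChartAnimator;
import com.github.mikephil.charting.interfaces.dataprovider.BarDataProvider;
import com.github.mikephil.charting.interfaces.datasets.IBarDataSet;
import com.github.mikephil.charting.renderer.HorizontalBarChartRenderer;
import com.github.mikephil.charting.utils.ViewPortHandler;

public class MirroredHorizontalBarChartRenderer extends HorizontalBarChartRenderer {

    public MirroredHorizontalBarChartRenderer(BarDataProvider chart, ChartAnimator animator,
                                              ViewPortHandler viewPortHandler) {
        super(chart, animator, viewPortHandler);
    }

    @Override
    protected void drawDataSet(Canvas c, IBarDataSet dataSet, int index) {
        // copy of the original drawDataSet body with the mirroring
        // block shown above inserted before the bars are drawn
    }
}

Then install it on the chart, e.g. chart.setRenderer(new MirroredHorizontalBarChartRenderer(chart, chart.getAnimator(), chart.getViewPortHandler()));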
I have a rectangle with a sprite on it, and I have to detect whether the touch position lies within the rectangle.
This is my code:
if (Gdx.input.isTouched())
{
    int x1 = Gdx.input.getX();
    int y1 = Gdx.input.getY();
    Vector3 inputs = new Vector3(x1, y1, 0);
    gamecam.unproject(inputs);
    Gdx.app.log("x" + inputs.x, "y" + inputs.y);
    Gdx.app.log("rect" + rectangle.x, "rect" + rectangle.y);
    if (rectangle.contains(inputs.x, inputs.y))
    {
        Gdx.app.log("x" + inputs.x, "y" + inputs.y);
        Gdx.app.log("rect" + rectangle, "rect" + rectangle.y);
    }
}
Rectangle definition:
BodyDef bdef = new BodyDef();
bdef.type = BodyDef.BodyType.StaticBody;
b2body = screen.getWorld().createBody(bdef);
rectangle = new Rectangle();
rectangle.setHeight(55);
rectangle.setWidth(55);
PolygonShape head = new PolygonShape();
rectangle.setX(300);
rectangle.setY(10);
bdef.position.set((rectangle.getX() - rectangle.getWidth() / 2) / MyJungleGame.PPM, (rectangle.getY() - rectangle.getHeight() / 2) / MyJungleGame.PPM);
head.setAsBox(rectangle.getWidth() / 2 / MyJungleGame.PPM, rectangle.getHeight() / 2 / MyJungleGame.PPM);
FixtureDef fdef = new FixtureDef();
fdef.shape = head;
setPosition(b2body.getPosition().x - getWidth() / 2, b2body.getPosition().y - getHeight() / 2);
This is my output:
The small rectangle at the bottom of the screen is the rectangle I created, but nothing happens when I click it. I checked the coordinates, and here is the log:
x-0.925: y-0.5625
rect300.0: rect10.0
x-0.925: y-0.5625
rect300.0: rect10.0
x-0.925: y-0.5625
I tried checking the touch using the method below:
if (inputs.x > sprite.getX() && inputs.x < sprite.getX() + sprite.getWidth())
{
    if (inputs.y > sprite.getY() && inputs.y < sprite.getY() + sprite.getHeight())
    {
        Gdx.app.log("sprite touched", "");
    }
}
This doesn't work either. Any idea where I made the mistake? Thanks in advance.
Since you are using Box2D, detecting collisions the usual way is more complicated for new users.
However, looking at your code...
I would advise converting these coordinates using the PPM of your world:
int x1 = Gdx.input.getX();
int y1 = Gdx.input.getY();
Vector3 inputs = new Vector3(x1, y1, 0);
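The log above already shows the mismatch: after unproject, inputs is in world (meter) units (x around -0.9), while rectangle was defined in pixel units (300, 10). A minimal sketch of the fix, assuming gamecam is the Box2D world camera and MyJungleGame.PPM is the same pixels-per-meter constant used when creating the body:

Vector3 inputs = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
gamecam.unproject(inputs); // inputs is now in world (meter) units

// Scale the pixel-based rectangle into world units before testing
Rectangle worldRect = new Rectangle(
        rectangle.x / MyJungleGame.PPM,
        rectangle.y / MyJungleGame.PPM,
        rectangle.width / MyJungleGame.PPM,
        rectangle.height / MyJungleGame.PPM);

if (worldRect.contains(inputs.x, inputs.y)) {
    Gdx.app.log("touch", "rectangle touched");
}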
Also, if you are going to build a collision system with Box2D, you should use this: http://www.aurelienribon.com/blog/2011/07/box2d-tutorial-collision-filtering/
I'm making a bidirectional path tracer and I'm having some trouble.
To be clear:
1) One point light
2) All objects are diffuse
3) All objects are spheres, even walls (they are very large)
4) NO MIS WEIGHTING
The light emission is a 3D vector, and the BRDF of a sphere is a 3D vector; both are hard-coded.
In the main function below I generate the eye path and the light path, then I connect them. At least I try.
In this post I will talk about the main function, then the eye path, then the light path. The discussion of the connecting function will come once the eye path and light path are correct.
First questions:
Is the generation of the first light point correct?
Do I need to compute this point according to the emission of the light source, or is it just the emission? The relevant line is marked with comments where I fill the Vertices structure.
Do I need to translate fromLight in order to put it on the sphere?
The code below is an excerpt from the main function. Above it there are two for loops going through all the pixels. camera.o is the eye; cameraRayDir is the direction to the current pixel.
// The light path's starting point is at the same position as the light
Ray fromLight(Vec(0, 24.3, 0), Vec());
Sphere light = spheres[7];

#define PDF 0.15915494309 // 1 / (2 * PI), pdf of uniform hemisphere sampling

for (int i = 0; i < samps; ++i)
{
    std::vector<Vertices> PathEye;
    std::vector<Vertices> PathLight;

    Vec cameraRayDir = cx * (double(x) / w - .5) + cy * (double(y) / h - .5) + camera.d;
    Ray rayEye(camera.o, cameraRayDir.norm());

    // Hemisphere oriented towards the top
    fromLight.d = generateRayInHemisphere(fromLight.o, Vec(0, 1, 0)).d;

    // n is the light's surface normal, Vec(0, 1, 0) here
    double f = clamp(n.dot(fromLight.d.norm()));

    Vertices vert;
    vert.d = fromLight.d;
    vert.x = fromLight.o;
    vert.id = 7;
    vert.cos = f;
    vert.n = Vec(0, 1, 0).norm();

    // this one?
    //vert.couleur = spheres[7].e * f / PDF;
    // Or this one?
    vert.couleur = spheres[7].e;

    PathLight.push_back(vert);

    int sizeEye = generateEyePath(PathEye, rayEye, maxDepth);
    int sizeLight = generateLightPath(PathLight, fromLight, maxDepth);

    for (int s = 0; s < sizeLight; ++s)
    {
        for (int t = 1; t < sizeEye; ++t)
        {
            int depth = t + s - 1;
            if ((s == 0 && t == 0) || depth < 0 || depth > maxDepth)
                continue;

            pixelValue = pixelValue + connectPaths(PathEye, PathLight, s, t);
        }
    }
}
For the eye path I intersect the geometry, then I compute the illumination according to the distance to the light. The colour is black if the point is in shadow.
Second question: for the eye path and the direct illumination, is the computation correct? In a lot of code I've seen people use the pdf even for direct illumination, but I'm only using a point light and spheres.
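For reference, a point light is a delta light: there is nothing to sample, so no pdf appears in the direct term. With emission $E$, visibility $V$, and distance $d$ to the light, the usual estimate is

$$L_{\text{direct}}(x) = f_r(x)\,V(x)\,\frac{E\cos\theta}{d^2}$$

which matches the shape of the vert.couleur line in the code below; a pdf only enters once the light has an area that must be sampled.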
int generateEyePath(std::vector<Vertices>& v, Ray eye, int maxDepth)
{
    double t;
    int id = 0;
    Vertices vert;

    while (v.size() <= maxDepth)
    {
        // Russian roulette termination
        if (distribRREye(generatorRREye) < 10)
            break;

        // Intersect all the geometry;
        // id is the index of the intersected geometry in the spheres array
        intersect(eye, t, id);
        const Sphere& obj = spheres[id];

        // Intersection point
        Vec x = eye.o + eye.d * t;
        // Normal
        Vec n = (x - obj.p).norm();

        Vec direction = light.p - x;
        // Shadow ray
        Ray RaytoLight = Ray(x, direction.norm());
        const float distance = direction.length();

        // Shadow test
        const bool visibility = intersect(RaytoLight, t, id);
        const Sphere& lumiere = spheres[id];

        float degree = clamp(n.dot((lumiere.p - x).norm()));

        // If the intersected geometry is not a light, the point is in shadow
        if (lumiere.e.x == 0)
        {
            vert.couleur = Vec();
        }
        else
        {
            // obj.c is the BRDF, lumiere.e is the emission
            vert.couleur = (obj.c).mult(lumiere.e / (distance * distance)) * degree;
        }

        vert.x = x;
        vert.id = id;
        vert.n = n;
        vert.d = eye.d.norm();
        vert.cos = degree;
        v.push_back(vert);

        eye = generateRayInHemisphere(x, n);
    }
    return v.size();
}
For the light path, I compute each point according to the previous one and the values at that point, like in ordinary path tracing.
Third question: is the colour computation correct?
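For reference, with a diffuse BRDF $\rho/\pi$ and uniform hemisphere sampling (pdf $1/2\pi$), the standard per-vertex throughput update is

$$\beta_k = \beta_{k-1}\,\frac{(\rho/\pi)\cos\theta}{1/(2\pi)} = \beta_{k-1}\cdot 2\rho\cos\theta$$

which is the same shape as the vert.couleur line in the code below.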
int generateLightPath(std::vector<Vertices>& v, Ray fromLight, int maxDepth)
{
    double t;
    int id = 0;
    Vertices vert;
    Vec previous;

    while (v.size() <= maxDepth)
    {
        // Russian roulette termination
        if (distribRRLight(generatorRRLight) < 10)
            break;

        previous = v.back().couleur;

        intersect(fromLight, t, id);
        // Intersected geometry
        const Sphere& obj = spheres[id];

        // Intersection point
        Vec x = fromLight.o + fromLight.d * t;
        // Normal
        Vec n = (x - obj.p).norm();

        double f = clamp(n.dot(fromLight.d.norm()));

        // obj.c is the BRDF
        vert.couleur = previous.mult(((obj.c / M_PI) * f) / PDF);
        vert.x = x;
        vert.id = id;
        vert.n = n;
        vert.d = fromLight.d.norm();
        vert.cos = f;
        v.push_back(vert);

        fromLight = generateRayInHemisphere(x, n);
    }
    return v.size();
}
For the moment I get this result: [screenshot]
The connecting function will come once EyePath and LightPath are good.
Thank you all
Try the spherical reference scene mentioned in this paper; I think you can then work out most of your questions yourself, since it has an analytical solution.
https://www.researchgate.net/publication/221546261_Testing_Monte-Carlo_Global_Illumination_Methods_with_Analytically_Computable_Scenes
It will save you time to implement and verify your understanding of path tracing and light tracing separately first, and then try to combine them with weights.
I am checking whether a UIImage is darker or whiter. I would like to use the method below, but only on the bottom third of the image, not all of it.
I wonder how exactly to change it to check that; I am not that familiar with pixel-level code.
BOOL isDarkImage(UIImage* inputImage) {

    BOOL isDark = FALSE;
    CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(inputImage.CGImage));
    const UInt8 *pixels = CFDataGetBytePtr(imageData);

    int darkPixels = 0;
    long length = CFDataGetLength(imageData);
    int const darkPixelThreshold = (inputImage.size.width * inputImage.size.height) * .25;

    // should I change the length here?
    for (int i = 0; i < length; i += 4)
    {
        int r = pixels[i];
        int g = pixels[i+1];
        int b = pixels[i+2];

        // luminance calculation weights g most heavily to match human vision
        float luminance = (0.299*r + 0.587*g + 0.114*b);
        if (luminance < 150) darkPixels++;
    }

    if (darkPixels >= darkPixelThreshold)
        isDark = YES;

    CFRelease(imageData);
    return isDark;
}
I could just crop that part of the image, but that would be an inefficient way and a waste of time.
The solution marked correct here is a more thoughtful approach to getting the pixel data (more tolerant of differing formats), and it also demonstrates how to address individual pixels. With a small adjustment, you can target the bottom of the image as follows:
+ (NSArray*)getRGBAsFromImage:(UIImage*)image
                          atX:(int)xx
                         andY:(int)yy
                          toX:(int)toX
                          toY:(int)toY {

    // ...

    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    int byteIndexEnd = (bytesPerRow * toY) + toX * bytesPerPixel;

    while (byteIndex < byteIndexEnd) {
        // contents of the loop remain the same
        // ...
    }
}
To get the bottom third of the image, call this with xx = 0, yy = 2.0 * image.height / 3.0, and toX and toY equal to the image width and height, respectively. Loop over the colors in the returned array and compute luminance as your post suggests.
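If it helps to see the indexing in one place, here is a short language-neutral sketch (written in Java with hypothetical names) of scanning only the bottom third of a tightly packed RGBA buffer; the row-major arithmetic is the same as in the Objective-C above:

// Count dark pixels in the bottom third of a tightly packed RGBA
// buffer (4 bytes per pixel, rows of bytesPerRow bytes).
static boolean isBottomThirdDark(byte[] rgba, int width, int height, int bytesPerRow) {
    int startY = 2 * height / 3;                  // first row of the bottom third
    int scanned = width * (height - startY);      // number of pixels examined
    int darkPixelThreshold = scanned / 4;         // same 25% rule as the original
    int darkPixels = 0;

    for (int y = startY; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int i = y * bytesPerRow + x * 4;      // row-major byte index
            int r = rgba[i] & 0xFF;
            int g = rgba[i + 1] & 0xFF;
            int b = rgba[i + 2] & 0xFF;
            double luminance = 0.299 * r + 0.587 * g + 0.114 * b;
            if (luminance < 150) darkPixels++;
        }
    }
    return darkPixels >= darkPixelThreshold;
}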
I just want to be able to do something when my skeletal joint (x, y, z) coordinates are over the (x, y, z) coordinates of a button. I have the following code, but somehow it doesn't work properly: as soon as my hand moves, it triggers without my hand reaching the button.
if (skeletonFrame != null)
{
    //int skeletonSlot = 0;
    Skeleton[] skeletonData = new Skeleton[skeletonFrame.SkeletonArrayLength];
    skeletonFrame.CopySkeletonDataTo(skeletonData);
    Skeleton playerSkeleton = (from s in skeletonData where s.TrackingState == SkeletonTrackingState.Tracked select s).FirstOrDefault();

    if (playerSkeleton != null)
    {
        Joint rightHand = playerSkeleton.Joints[JointType.HandRight];
        handPosition = new Vector2((((0.5f * rightHand.Position.X) + 0.5f) * (640)), (((-0.5f * rightHand.Position.Y) + 0.5f) * (480)));

        var rightHands = playerSkeleton.Joints[JointType.HandRight];
        var rightHandsX = rightHands.Position.X;
        var rightHandsY = rightHands.Position.Y;
        var rightHandsZ = rightHands.Position.Z;

        if (Math.Sqrt(Math.Pow(rightHandsX - position.X, 2) + Math.Pow(rightHandsY - position.Y, 2)) < 20)
        {
            // Exit();
        }
        if (Math.Sqrt(Math.Pow(rightHandsX - start_bttn.Bounds.X, 1) + Math.Pow(rightHandsY - start_bttn.Bounds.Y, 1)) < 10)
        {
            currentGameState = GameState.Selection;
            // Exit();
        }
        if ((rightHandsX < GraphicsDevice.Viewport.Width / 2 + 150 && rightHandsX > GraphicsDevice.Viewport.Width / 2 - 75) && (rightHandsY > GraphicsDevice.Viewport.Height / 2 && rightHandsY < GraphicsDevice.Viewport.Height / 2 + 50))
        {
            currentGameState = GameState.Selection;
        }
    }
}
Here is my hand tracking function. See if it does what you want, or gets you closer...
private void TrackHandMovement(Skeleton skeleton)
{
    Joint leftHand = skeleton.Joints[JointType.HandLeft];
    Joint rightHand = skeleton.Joints[JointType.HandRight];
    Joint leftShoulder = skeleton.Joints[JointType.ShoulderLeft];
    Joint rightShoulder = skeleton.Joints[JointType.ShoulderRight];
    Joint rightHip = skeleton.Joints[JointType.HipRight];

    // the right hand joint is being tracked
    if (rightHand.TrackingState == JointTrackingState.Tracked)
    {
        // the hand is sufficiently in front of the shoulder
        if (rightShoulder.Position.Z - rightHand.Position.Z > 0.4)
        {
            double xScaled = (rightHand.Position.X - leftShoulder.Position.X) / ((rightShoulder.Position.X - leftShoulder.Position.X) * 2) * SystemParameters.PrimaryScreenWidth;
            double yScaled = (rightHand.Position.Y - rightShoulder.Position.Y) / (rightHip.Position.Y - rightShoulder.Position.Y) * SystemParameters.PrimaryScreenHeight;

            // the hand has moved enough to update the screen position (jitter control / smoothing)
            if (Math.Abs(rightHand.Position.X - xPrevious) > MoveThreshold || Math.Abs(rightHand.Position.Y - yPrevious) > MoveThreshold)
            {
                RightHandX = xScaled;
                RightHandY = yScaled;

                xPrevious = rightHand.Position.X;
                yPrevious = rightHand.Position.Y;

                // reset the tracking timer
                trackingTimerCounter = 10;
            }
        }
    }
}
There is a bit of math in there to translate the hand position to a screen position. Different strokes for different folks, but my logic is:
Shoulders = top of screen
Hips = bottom of screen
Left shoulder = left-most on screen
To get the right-most screen position, I take the distance between the left and right shoulders and add it to the right shoulder.
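As a quick sanity check of that mapping, here is the same arithmetic with made-up joint positions (in Java rather than C#, but the formula is identical to xScaled above):

// Hypothetical joint x positions in meters, 1920-pixel-wide screen.
double leftShoulderX = -0.20;
double rightShoulderX = 0.20;
double handX = 0.35;
double screenWidth = 1920;

// The usable range starts at the left shoulder and spans twice the
// shoulder width, so the right edge sits one shoulder-span to the
// right of the right shoulder.
double xScaled = (handX - leftShoulderX)
        / ((rightShoulderX - leftShoulderX) * 2) * screenWidth;

System.out.println(xScaled); // 0.55 / 0.8 * 1920 = 1320.0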
So I have a function that takes two MKMapRects, where the second intersects the first. The function creates an MKPolygon that is the first rect without the intersecting parts:
-(void) polygons:(MKMapRect)fullRect exclude:(MKMapRect)excludeArea {
    NSLog(@"Y is: %f height: %f", excludeArea.origin.y, excludeArea.size.height);

    double top = excludeArea.origin.y - fullRect.origin.y;
    double lft = excludeArea.origin.x - fullRect.origin.x;
    double btm = (fullRect.origin.y + fullRect.size.height) - (excludeArea.origin.y + excludeArea.size.height);
    double rgt = (fullRect.origin.x + fullRect.size.width) - (excludeArea.origin.x + excludeArea.size.width);

    double ot = fullRect.origin.y, it = (ot + top);
    double ol = fullRect.origin.x, il = (ol + lft);
    double ob = (fullRect.origin.y + fullRect.size.height), ib = (ob - btm);
    double or = (fullRect.origin.x + fullRect.size.width), ir = (or - rgt);

    MKMapPoint points[11] = {{ol,it}, {ol,ot}, {or,ot}, {or,ob}, {ol,ob}, {ol,it}, {il,it}, {ir,it}, {ir,ib}, {il,ib}, {il,it}};
    MKPolygon *polygon = [MKPolygon polygonWithPoints:points count:11];
}
My question now is: how do I get the minimum number of MKMapRects from this MKPolygon? I have done some googling as well as looking through the forum, but haven't found anything.
EDIT:
So the goal is the following:
I have an MKMapRect, rect1, and a list of rectangles, rectList, containing the MKMapRects that intersect rect1. I want to create a rectilinear MKPolygon from rect1, remove the surface of every MKMapRect in rectList from it, and then decompose the resulting rectilinear polygon into the minimum number of MKMapRects.
Right now the problem is twofold: I am able to create a polygon when removing one MKMapRect from rect1, but I don't know how to remove the subsequent map rects from rect1, and I don't know how to extract the minimum set of MKMapRects from the polygon created.
Best regards
Peep
I'm not sure if this is what you're looking for, or whether I understand the question fully, but if all you need is the minimum number of rectangles in the polygon created by subtracting one rectangle from another, you should be able to get it by counting how many corner points of the second rectangle are contained in the first. In code:
int minNumRects(MKMapRect r1, MKMapRect r2) {
    // count how many corners of r2 lie inside r1
    int numPointsContained = 0;
    MKMapPoint corners[4] = {
        { MKMapRectGetMinX(r2), MKMapRectGetMinY(r2) },
        { MKMapRectGetMaxX(r2), MKMapRectGetMinY(r2) },
        { MKMapRectGetMinX(r2), MKMapRectGetMaxY(r2) },
        { MKMapRectGetMaxX(r2), MKMapRectGetMaxY(r2) }
    };
    for (int i = 0; i < 4; i++) {
        if (MKMapRectContainsPoint(r1, corners[i])) {
            numPointsContained++;
        }
    }

    if (numPointsContained == 1) {
        return 2;   // corner overlap leaves an L-shape
    } else if (numPointsContained == 2) {
        return 3;   // edge overlap leaves a U-shape
    } else if (numPointsContained == 4) {
        return 4;   // full containment leaves a frame
    } else {
        return 0;
    }
}
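For example, with r1 = MKMapRectMake(0, 0, 100, 100) and r2 = MKMapRectMake(80, 80, 40, 40) (made-up values), exactly one corner of r2 lies inside r1, so the function returns 2: the L-shaped remainder splits into two rectangles.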
P.S. This assumes the rectangles are axis-aligned, but as far as I know that's always the case with MKMapRects.