C++: Plot a float variable, as opposed to int, onto a panel - c++-cli

I have the following code which plots variables to a panel graphically.
Point point1 = Point(20, height);
Point point2 = Point(20, 0);
buffGraphics->DrawLine(System::Drawing::Pens::Blue, point1, point2);
However, this is just a test; I want to be able to plot float variables, as I need to build a graph. How can you plot a float onto a panel / represent one?

You should use PointF instead of Point: DrawLine has overloads that take PointF, and PointF accepts floats.

You shouldn't really use C-style casts like this:
Point point1 = Point((int)x, (int)y);
They are unsafe and hard to spot/read. Instead, use the following:
Point point1 = Point(static_cast<int>(x), static_cast<int>(y));
Alternatively, you could extend the Point class to have methods returning the integral values. The benefit of this approach is that you can add extra functionality, such as ceil and floor variants, without having to create a temporary copy of a Point. It would look something like this:
int xtoi() const { return static_cast<int>(x); }
int ytoi() const { return static_cast<int>(y); }
int xtoi_ceil() const { return static_cast<int>(ceil(x)); }
int xtoi_floor() const { return static_cast<int>(floor(x)); }
...
Point ptoi() const { return Point(static_cast<int>(x), static_cast<int>(y)); }
...

Related

Is there a way to flip an FlxCamera on the x axis?

setScale() doesn't work either.
I was going to flip the camera for when something interesting happens, but I don't know how.
You could flip the camera (and apply other interesting effects) with shaders!
FlxCamera has a function called setFilters() that allows you to add a list of bitmap filters to an active camera.
Here's a simple filter I wrote that flips all textures horizontally:
import openfl.filters.BitmapFilter;
import openfl.filters.ShaderFilter;
var filters:Array<BitmapFilter> = [];
// Add a filter that flips everything horizontally
var filter = new ShaderFilter(new FlipXAxis());
filters.push(filter);
// Apply filters to camera
FlxG.camera.setFilters(filters);
And in a separate class called FlipXAxis:
import flixel.system.FlxAssets.FlxShader;
class FlipXAxis extends FlxShader
{
@:glFragmentSource('
#pragma header
void main()
{
vec2 uv = vec2(1.0 - openfl_TextureCoordv.x, openfl_TextureCoordv.y);
gl_FragColor = texture2D(bitmap, uv);
}')
public function new()
{
super();
}
}

Built-in variables not usable in certain cases (Processing 3)

I've been building a program with Processing 3 the last several days (first time going back to Processing since Intro to Computer Science in 2009) and kept having this issue:
public class PolarMap {
...
PVector[][] mapping = new PVector[width][height];
PVector[][] cartesian = new PVector[width][height];
PVector cart = new PVector();
PVector polar = new PVector();
/**
Maps every pixel on the cartesian plane to a polar coordinate
relative to some origin point.
*/
public void Map(float originX, float originY){
for (int x=0; x < width; x++){
for (int y=0; y < height; y++){
...
cart.add(x, y);
polar.add(r, theta);
mapping[x][y] = polar; ***
cartesian[x][y] = cart;
}
}
}
...
}
On the line with the ***, I would always get an ArrayIndexOutOfBounds exception thrown. I searched SO, Reddit, and Processing's own documentation to figure out why. If you're not familiar with Processing, width and height are both built-in variables equal to your canvas's dimensions in pixels, as declared via size() in the setup() method (800x800 in my case). For some reason, both arrays were not being initialized to those values; instead, they were initialized to the default value of those variables: 100.
So, because it made no sense but it was one of those times, I tried declaring new variables:
int high = height;
int wide = width;
and initialized the array with those variables. And wouldn't you know it, that solved the problem. I now have two 800x800 arrays.
So here's my question: WHY were the built-in variables not working as expected when used to initialize the arrays, but did exactly what they were supposed to when assigned to a defined variable?
Think about when the width and height variables get their values. Consider this sample sketch:
int value = width;
void setup(){
size(500, 200);
println(value);
}
If you run this program, you'll see that it prints 100, even though the window is 500 pixels wide. This is because the int value = width; line runs before the width is set!
For this to work how you'd expect, you have to set the value variable after the size() function is called. So you could do this:
int value;
void setup(){
size(500, 200);
value = width;
println(value);
}
Move any initializations to inside the setup() function, after the size() function is called, and you'll be fine.
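The same ordering rule can be demonstrated in plain Java, since Processing sketches compile to Java classes. The sketch below is a simplified model of the startup order (Sketch and its members are made-up stand-ins, not the real PApplet API): field initializers run when the object is constructed, before setup() has had a chance to call size().

```java
// Simplified model of Processing's startup order (hypothetical classes,
// not the real PApplet): width defaults to 100 until size() runs.
class Sketch {
    int width = 100;           // default value before size() is called
    int value = width;         // field initializer: captures 100 at construction
    int valueAfterSetup;

    void size(int w, int h) { width = w; }

    void setup() {
        size(500, 200);
        valueAfterSetup = width;  // assigned after size(): sees 500
    }
}

public class Main {
    public static void main(String[] args) {
        Sketch s = new Sketch();  // field initializers run here
        s.setup();
        System.out.println(s.value);            // 100
        System.out.println(s.valueAfterSetup);  // 500
    }
}
```

This mirrors what happens in the question: the arrays were sized during field initialization, while width and height only got their real values later.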

Unity Input Touch issue

Could you please advise how I would go about using the input touch function in Unity to make an object change its x direction every time the user taps the screen? For example, in a 2D game, an object is moving forward (to the right) along x; if the user taps, the object would move backward along x (to the left). Sorry, no code is produced.
It's as simple as your name, "tony" :)
What you can do is make a simple script that moves your object left and right. On a screen touch you can easily change its direction by multiplying it by -1.
Here's a simple script that you can attach to your object.
using UnityEngine;
using System.Collections;
public class MoveObject : MonoBehaviour
{
// 1 for right and -1 for left.
float _direction = 1;
// Movement speed (distance moved per frame).
float _speed = 0.01f;
void Update ()
{
// Keep moving one unit ahead in the current direction.
transform.position = Vector3.MoveTowards (transform.position, new Vector3 (transform.position.x + _direction, transform.position.y, transform.position.z), _speed);
// GetMouseButtonDown(0) also fires for the first touch on mobile.
if (Input.GetMouseButtonDown (0))
_direction *= -1;
}
}
Hope this helps :)

How Can I merge complex shapes stored in an ArrayList with Geomerative Library

I store shapes of this class:
class Berg{
int vecPoint;
float[] shapeX;
float[] shapeY;
Berg(float[] shapeX, float[] shapeY, int vecPoint){
this.shapeX = shapeX;
this.shapeY = shapeY;
this.vecPoint = vecPoint;
}
void display(){
beginShape();
curveVertex(shapeX[vecPoint-1], shapeY[vecPoint-1]);
for(int i=0;i<vecPoint;i++){
curveVertex(shapeX[i], shapeY[i]);
}
curveVertex(shapeX[0],shapeY[0]);
curveVertex(shapeX[1],shapeY[1]);
endShape();
}
}
in an ArrayList with
shapeList.add(new Berg(xBig,yBig,points));
The shapes are defined with eight (curveVertex-)points (xBig and yBig) forming a shape around a randomly positioned center.
After checking if the shapes are intersecting I want to merge the shapes that overlap each other. I already have the detection of the intersection working but struggle to manage the merging.
I read that the library Geomerative has a way to do something like that with union() but RShapes are needed as parameters.
So my question is: How can I change my shapes into the required RShape type? Or more general (maybe I did some overall mistakes): How Can I merge complex shapes stored in an ArrayList with or without Geomerative Library?
Take a look at the API for RShape: http://www.ricardmarxer.com/geomerative/documentation/geomerative/RShape.html
That lists the constructors and methods you can use to create an RShape out of a series of points. It might look something like this:
class Berg{
public RShape toRShape(){
RShape rShape = new RShape();
rShape.addMoveTo(shapeX[0], shapeY[0]);
for(int i = 1; i < shapeX.length; i++){
rShape.addLineTo(shapeX[i], shapeY[i]);
}
rShape.addClose();
return rShape;
}
}
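If you'd rather avoid the extra library: since Processing sketches compile to Java, you can also union polygon outlines with java.awt.geom.Area, which treats the outline as straight segments (so curved curveVertex outlines would be approximated). A hedged sketch in plain Java (the class and method names here are made up for illustration):

```java
import java.awt.geom.Area;
import java.awt.geom.Path2D;

public class MergeShapes {
    // Build a closed polygon from parallel coordinate arrays,
    // analogous to the shapeX/shapeY arrays in the Berg class.
    static Area toArea(float[] xs, float[] ys, int n) {
        Path2D.Float path = new Path2D.Float();
        path.moveTo(xs[0], ys[0]);
        for (int i = 1; i < n; i++) {
            path.lineTo(xs[i], ys[i]);
        }
        path.closePath();
        return new Area(path);
    }

    public static void main(String[] args) {
        // Two overlapping axis-aligned squares.
        Area a = toArea(new float[]{0, 10, 10, 0}, new float[]{0, 0, 10, 10}, 4);
        Area b = toArea(new float[]{5, 15, 15, 5}, new float[]{5, 5, 15, 15}, 4);
        a.add(b);  // geometric union, in place
        System.out.println(a.contains(12, 12)); // true: covered only by b
        System.out.println(a.contains(20, 20)); // false: outside both
    }
}
```

The merged Area can then be walked with a PathIterator to get vertices back out for drawing with beginShape()/vertex().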

What's the transform parameter in b2PolygonShape::TestPoint(p1, p2)?

I'm very new to box2d and I just want to make a simple check to see if a point is inside a polygon in cocos2d.
b2PolygonShape polygon;
b2Vec2 vertices[] =
{
b2Vec2(300, 400),
b2Vec2(350, 400),
b2Vec2(300, 500),
b2Vec2(350, 500)
};
polygon.Set(vertices, 4);
if(polygon.TestPoint(b2Transform(), b2Vec2(301, 405)))
{
CCLOG(@"Point is inside");
}
I don't understand the first parameter, which expects a b2Transform. Why is this needed and what should I set it to? Is there something I'm forgetting? I'm trying to do this without doing anything complicated at all, like having a world object and so on. What's the easiest way?
bool TestPoint(const b2Transform& transform, const b2Vec2& p) const;
The transform allows you to specify the polygon in local coordinates, and then transform it (translate and rotate it) to its desired position/orientation. If you want the polygon vertices to be worldspace coordinates, use an identity transform (analogous to multiplying by 1):
b2Transform identity;
identity.SetIdentity();
polygon.TestPoint(identity, ...
You need to explicitly set it to identity, as the default constructors of b2Transform and its two members b2Vec2 and b2Rot don't do anything, and it will therefore contain random junk in a release build (debug builds usually set uninitialized values to 0).
See b2Math.h and b2PolygonShape.cpp for details.