ZedGraph: how to get dashed lines independent of physical graph size or # of data points

When I plot my data, the dashes are only visible if the # of data points is small, or if I manually widen the window, or if I zoom in on the graph. My expectation is that I'd see dashes regardless of these factors, as you'd get in Excel. Am I overlooking a ZedGraph config? Thanks very much.
void plot_array(ref ZedGraphControl zgc)
{
int num_samples = 100;
double[] xvals = new double[num_samples];
double[] yvals = new double[num_samples];
for (double i = 0; i < num_samples; i++)
{
xvals[(int)i] = i / 10;
yvals[(int)i] = Math.Sin(i / 10);
}
var lineItem = zgc.GraphPane.AddCurve("Can't see the dashes", xvals, yvals, Color.Black);
lineItem.Line.Style = System.Drawing.Drawing2D.DashStyle.Custom;
lineItem.Line.DashOn = 10;
lineItem.Line.DashOff = 10;
lineItem.Symbol.Type = SymbolType.None;
zgc.AxisChange();
zgc.Refresh();
}

Obviously, nothing is wrong with your settings; everything is normal. Since you have a lot of data points, the dashed line tends to look like a straight line.
If you try:
lineItem.Line.DashOn = 1;
lineItem.Line.DashOff = 10;
it solves your problem.

Related

DirectX 11 heightmap texture real-time modification problem

I'm making a terrain tool.
I made a 2D texture and am using it as a height map.
I want to change a specific part of the heightmap, but I'm having a problem.
I changed certain small parts, but the whole landscape of the texture is changed.
I would like to know the cause of this problem and how to solve it.
Thank you.
HeightMap ShaderResourceView creation code:
void TerrainRenderer::BuildHeightmapSRV(ID3D11Device* device)
{
ReleaseCOM(mHeightMapSRV);
ReleaseCOM(m_hmapTex);
D3D11_TEXTURE2D_DESC texDesc;
texDesc.Width = m_terrainData.HeightmapWidth; //basic value 2049
texDesc.Height = m_terrainData.HeightmapHeight; //basic value 2049
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R16_FLOAT;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_DYNAMIC;
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
texDesc.MiscFlags = 0;
// HALF is defined in xnamath.h, for storing 16-bit float.
std::vector<HALF> hmap(mHeightmap.size());
//current mHeightmap is all zero.
std::transform(mHeightmap.begin(), mHeightmap.end(), hmap.begin(), XMConvertFloatToHalf);
D3D11_SUBRESOURCE_DATA data;
data.pSysMem = &hmap[0];
data.SysMemPitch = m_terrainData.HeightmapWidth * sizeof(HALF);
data.SysMemSlicePitch = 0;
HR(device->CreateTexture2D(&texDesc, &data, &m_hmapTex));
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
srvDesc.Format = texDesc.Format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip = 0;
srvDesc.Texture2D.MipLevels = -1;
HR(device->CreateShaderResourceView(m_hmapTex, &srvDesc, &mHeightMapSRV));
}
HeightMap texture modifying code:
D3D11_MAPPED_SUBRESOURCE mappedData;
//m_hmapTex is ID3D11Texture2D*
HR(m_texMgr.m_context->Map(m_hmapTex, D3D11CalcSubresource(0, 0, 1), D3D11_MAP_WRITE_DISCARD, 0, &mappedData));
HALF* heightMapData = reinterpret_cast<HALF*>(mappedData.pData);
D3D11_TEXTURE2D_DESC heightmapDesc;
m_hmapTex->GetDesc(&heightmapDesc);
UINT width = heightmapDesc.Width;
for (int row = 0; row < width/4; ++row)
{
for (int col = 0; col < width/4; ++col)
{
UINT idx = (row * width) + col;
heightMapData[idx] = static_cast<HALF>(XMConvertFloatToHalf(200));
}
}
m_texMgr.m_context->Unmap(m_hmapTex, D3D11CalcSubresource(0,0,1));
Please refer to the picture below
The lower right area renders the HeightMap texture.
I wanted to edit only 1/4 width and height, but that's all changed.
When the completed heightmap is applied, it works normally.
A texture does not always have the same width in memory as its description suggests: some texture rows (strides) are padded to a larger size. You have to use row * stride size (the row pitch) to calculate the offset to write into, as in the sketch below.
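For illustration, here is an untested sketch of how the modifying loop from the question could take the pitch into account. It reuses the names from the question (m_texMgr, m_hmapTex, HR), and D3D11_MAPPED_SUBRESOURCE::RowPitch supplies the padded row size in bytes:
D3D11_TEXTURE2D_DESC heightmapDesc;
m_hmapTex->GetDesc(&heightmapDesc);
UINT width = heightmapDesc.Width;

D3D11_MAPPED_SUBRESOURCE mappedData;
HR(m_texMgr.m_context->Map(m_hmapTex, D3D11CalcSubresource(0, 0, 1),
                           D3D11_MAP_WRITE_DISCARD, 0, &mappedData));

// RowPitch is in bytes and may be larger than width * sizeof(HALF),
// so advance through the mapped memory one padded row at a time.
BYTE* rowStart = reinterpret_cast<BYTE*>(mappedData.pData);
for (UINT row = 0; row < width / 4; ++row)
{
    HALF* rowData = reinterpret_cast<HALF*>(rowStart + row * mappedData.RowPitch);
    for (UINT col = 0; col < width / 4; ++col)
        rowData[col] = XMConvertFloatToHalf(200.0f);
}
// Note: D3D11_MAP_WRITE_DISCARD invalidates the previous contents, so the
// untouched rows would also need to be refilled before unmapping.
m_texMgr.m_context->Unmap(m_hmapTex, D3D11CalcSubresource(0, 0, 1));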

Optimizing code to generate static

I am learning p5.js and wanted to generate a "static/noise texture" like so:
This is the code:
for (let y = 0; y < height; y++) {
for (let x = 0; x < width; x++) {
noiseVal = random(0,1);
stroke(255, noiseVal*255);
point(x,y);
}
}
This produces the desired outcome but it's obviously pretty slow since it has to iterate over every single pixel. What would be a more efficient way of doing this?
Your code is really not the best way to do this with p5.js.
Take a look at p5's pixels array.
When I run the following code, the function that uses the pixels array runs about 100 times faster.
function setup() {
createCanvas(50, 50);
background(255);
let start, time;
start = performance.now();
noise_1();
time = performance.now() - start;
print("noise_1 : " + time);
start = performance.now();
noise_2();
time = performance.now() -start;
print("noise_2 : " + time);
}
// Your code
function noise_1() {
for (let y = 0; y < height; y++) {
for (let x = 0; x < width; x++) {
noiseVal = random(0,1);
stroke(noiseVal*255);
point(x,y);
}
}
}
// same with pixels array
function noise_2() {
loadPixels();
for (let i=0; i < pixels.length; i+=4){
noiseVal = random(0, 255);
pixels[i] = pixels[i+1] = pixels[i+2] = noiseVal;
}
updatePixels();
}
output :
noise_1 : 495.1
noise_2 : 5.92
To generate a single frame of static, you're going to have to iterate over each pixel. You could make your blocks larger than a single pixel, but that will only reduce the problem, not get rid of it completely.
Instead, you can probably get away with pre-computing a few images of static (let's say 10 or so). Save these as a file or to an off-screen buffer (the createGraphics() function is your friend), and then draw those images instead of drawing each pixel every frame.

calculating forward kinematics using D-H matrix

I have a 6-DOF robot arm model:
[image: robot arm structure]
I want to calculate forward kinematics, so I use the D-H (Denavit-Hartenberg) matrix. The D-H parameters are:
static const std::vector<float> theta = {
0,0,90.0f,0,-90.0f,0};
// d
static const std::vector<float> d = {
380.948f,0,0,-560.18f,0,0};
// a
static const std::vector<float> a = {
-220.0f,522.331f,80.0f,0,0,94.77f};
// alpha
static const std::vector<float> alpha = {
90.0f,0,90.0f,-90.0f,-90.0f,0};
and the calculation:
glm::mat4 Robothand::armForKinematics() noexcept
{
glm::mat4 pose(1.0f);
float cos_theta, sin_theta, cos_alpha, sin_alpha;
for (auto i = 0; i < 6;i++)
{
cos_theta = cosf(glm::radians(theta[i]));
sin_theta = sinf(glm::radians(theta[i]));
cos_alpha = cosf(glm::radians(alpha[i]));
sin_alpha = sinf(glm::radians(alpha[i]));
glm::mat4 Ai = {
cos_theta, -sin_theta * cos_alpha,sin_theta * sin_alpha, a[i] * cos_theta,
sin_theta, cos_theta * cos_alpha, -cos_theta * sin_alpha,a[i] * sin_theta,
0, sin_alpha, cos_alpha, d[i],
0, 0, 0, 1 };
pose = pose * Ai;
}
return pose;
}
The problem I have is that I can't get the correct result. For example, to calculate the transformation matrix from the first joint to the 4th joint, I change the for loop to i < 3; then I get the pose matrix, and I can get the origin of the 4th coordinate system by pose * (0,0,0,1). But the result (380.948, 382.331, 0) doesn't seem correct, because it should move along the x-axis, not the y-axis. I have read many books and materials about the D-H matrix, but I can't figure out what's wrong with it.
I have figured it out by myself. The real problem is glm::mat: glm matrices are column-major, which means the brace initializer fills columns before rows, so my Ai matrix was effectively transposed. I changed the code and got the correct result:
for (int i = 0; i < joint_num; ++i)
{
pose = glm::rotate(pose, glm::radians(degrees[i]), glm::vec3(0, 0, 1));
pose = glm::translate(pose,glm::vec3(0,0,d[i]));
pose = glm::translate(pose, glm::vec3(a[i], 0, 0));
pose = glm::rotate(pose,glm::radians(alpha[i]),glm::vec3(1,0,0));
}
then I can get the position by:
auto pos = pose * glm::vec4(x,y,z,1);
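For reference, the explicit D-H matrix from the question can also be kept if glm's column-major storage is taken into account. Below is a minimal, untested sketch (the helper name dhMatrix is only for illustration): the matrix is written row by row for readability and then transposed so that glm stores the intended layout.
#include <cmath>
#include <glm/glm.hpp>

// Build one D-H link transform; glm::mat4's brace initializer is filled
// column by column, so the row-major listing below must be transposed.
glm::mat4 dhMatrix(float theta_deg, float d, float a, float alpha_deg)
{
    const float ct = cosf(glm::radians(theta_deg));
    const float st = sinf(glm::radians(theta_deg));
    const float ca = cosf(glm::radians(alpha_deg));
    const float sa = sinf(glm::radians(alpha_deg));

    glm::mat4 rowMajor = {
        ct, -st * ca,  st * sa, a * ct,
        st,  ct * ca, -ct * sa, a * st,
        0,   sa,       ca,      d,
        0,   0,        0,       1 };
    return glm::transpose(rowMajor);
}
Accumulating pose = pose * dhMatrix(theta[i], d[i], a[i], alpha[i]); over the parameter tables above should then give the same result as the rotate/translate chain.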

Bidirectional path tracing

I'm making a bidirectional path tracer and I have some trouble.
To be clear:
1) One point light
2) All objects are diffuse
3) All objects are spheres, even walls (they are very large)
4) NO MIS WEIGHTING
The light emission is a 3D vector. The BRDF of a sphere is a 3D vector. Hard coded.
In the main function below I generate EyePath and LightPath, then I connect them. At least I try.
In this post I will talk about the main function, then EyePath, then LightPath. I will cover the connecting function once EyePath and LightPath are good.
First questions:
Is the generation of the first light point correct?
Do I need to weight this point by the emission of the light source, or is it just the emission? The relevant line is commented where I fill the Vertices structure.
Do I need to translate fromLight in order to put it on the sphere?
The code below is an excerpt from the main function. Above it there are two for loops going through all pixels. camera.o is the eye; cameraRayDir is the direction to the current pixel.
// The light path's starting point is at the same position as the light
Ray fromLight(Vec(0, 24.3, 0), Vec());
Sphere light = spheres[7];
#define PDF 0.15915494309 // 1 / (2 * PI)
for(int i = 0; i < samps; ++i)
{
std::vector<Vertices> PathEye;
std::vector<Vertices> PathLight;
Vec cameraRayDir = cx * (double(x) / w - .5) + cy * (double(y) / h - .5) + camera.d;
Ray rayEye(camera.o, cameraRayDir.norm());
// Hemisphere oriented towards the top
fromLight.d = generateRayInHemisphere(fromLight.o,Vec(0,1,0)).d;
double f = clamp(n.dot(fromLight.d.norm()));
Vertices vert;
vert.d = fromLight.d;
vert.x = fromLight.o;
vert.id = 7;
vert.cos = f;
vert.n = Vec(0,1,0).norm();
// this one ?
//vert.couleur = spheres[7].e * f / PDF;
// Or this one ?
vert.couleur = spheres[7].e;
PathLight.push_back(vert);
int sizeEye = generateEyePath(PathEye, rayEye, maxDepth);
int sizeLight = generateLightPath(PathLight, fromLight, maxDepth);
for (int s = 0; s < sizeLight; ++s)
{
for (int t = 1; t < sizeEye; ++t)
{
int depth = t + s - 1;
if ((s == 0 && t == 0) || depth < 0 || depth > maxDepth)
continue;
pixelValue = pixelValue + connectPaths(PathEye, PathLight, s, t);
}
}
}
For the EyePath I intersect the geometry, then I compute the illumination according to the distance to the light. The colour is black if the point is in shadow.
Second question: for the eye path and the direct illumination, is the computation correct? In a lot of code I've seen, people use the pdf even for direct illumination, but I'm only using a point light and spheres.
int generateEyePath(std::vector<Vertices>& v, Ray eye, int maxDepth)
{
double t;
int id = 0;
Vertices vert;
int RussianRoulette;
while(v.size() <= maxDepth)
{
if(distribRREye(generatorRREye) < 10)
break;
// Intersect all the geometry
// id is the id of the intersected geometry in an array
intersect(eye, t, id);
const Sphere& obj = spheres[id];
// Intersection point
Vec x = eye.o + eye.d * t;
// normal
Vec n = (x - obj.p).norm();
Vec direction = light.p - x;
// Shadow ray
Ray RaytoLight = Ray(x, direction.norm());
const float distance = direction.length();
// shadow
const bool visibility = intersect(RaytoLight, t, id);
const Sphere &lumiere = spheres[id];
float degree = clamp(n.dot((lumiere.p - x).norm()));
// If the intersected geometry is not a light, then in shadow
if(lumiere.e.x == 0)
{
vert.couleur = Vec();
}
else // else we compute the colour
// obj.c is the brdf, lumiere.e is the emission
vert.couleur = (obj.c).mult(lumiere.e / (distance * distance)) * degree;
vert.x = x;
vert.id = id;
vert.n = n;
vert.d = eye.d.norm();
vert.cos = degree;
v.push_back(vert);
eye = generateRayInHemisphere(x,n);
}
return v.size();
}
For the LightPath, I compute each point from the previous one and the values at that point, as in common path tracing.
Third question: is the colour computation correct?
int generateLightPath(std::vector<Vertices>& v, Ray fromLight, int maxDepth)
{
double t;
int id = 0;
Vertices vert;
Vec previous;
while(v.size() <= maxDepth)
{
if(distribRRLight(generatorRRLight) < 10)
break;
previous = v.back().couleur;
intersect(fromLight, t, id);
// intersected geometry
const Sphere& obj = spheres[id];
// Intersection point
Vec x = fromLight.o + fromLight.d * t;
// normal
Vec n = (x - obj.p).norm();
double f = clamp(n.dot(fromLight.d.norm()));
// obj.c is the brdf
vert.couleur = previous.mult(((obj.c / M_PI) * f) / PDF);
vert.x = x;
vert.id = id;
vert.n = n;
vert.d = fromLight.d.norm();
vert.cos = f;
v.push_back(vert);
fromLight = generateRayInHemisphere(x,n);
}
return v.size();
}
For the moment I get this result.
The connecting function will come once EyePath and LightPath are good.
Thank you all
Try the spherical reference scene mentioned in this paper. I think you can then work out most of your questions by yourself, since it has an analytical solution.
https://www.researchgate.net/publication/221546261_Testing_Monte-Carlo_Global_Illumination_Methods_with_Analytically_Computable_Scenes
It would save you time to implement and verify your understanding of path tracing and light tracing separately first, then try to combine them with weights.

Elliptical fixture in Box2D and Cocos2d

I am trying to develop an iOS game in Cocos2d + Box2d. I want to use elliptical fixtures in Box2D. I tried using a b2Capsule shape, but it's not exactly what I want, as the collisions are not handled properly. Has anyone done this before?
For specific shapes in Box2D you will have to triangulate your original polygon (in your case an ellipse from which you keep a certain number of vertices).
For this, you can use poly2tri's excellent constrained Delaunay triangulation at http://code.google.com/p/poly2tri/
It is very simple. Here is the way I get my triangles:
- (NSArray*) triangulate:(NSArray*)verticesArray
{
NSMutableArray* outputTriangles = [[[NSMutableArray alloc] init] autorelease];
p2t::CDT* triangulationContainer;
vector<p2t::Triangle*> p2tTriangles;
vector< vector<p2t::Point*> > polylines;
vector<p2t::Point*> polyline;
for (hOzPoint2D *point in verticesArray) {
polyline.push_back(new p2t::Point([point x], [point y]));
}
polylines.push_back(polyline);
triangulationContainer = new p2t::CDT(polyline);
triangulationContainer->Triangulate();
p2tTriangles = triangulationContainer->GetTriangles();
for (int i = 0; i < p2tTriangles.size(); i++) {
p2t::Triangle& t = *p2tTriangles[i];
p2t::Point& a = *t.GetPoint(0);
p2t::Point& b = *t.GetPoint(1);
p2t::Point& c = *t.GetPoint(2);
[outputTriangles addObject:[NSArray arrayWithObjects:
[hOzPoint2D point2DWithDoubleX:a.x doubleY:a.y],
[hOzPoint2D point2DWithDoubleX:b.x doubleY:b.y],
[hOzPoint2D point2DWithDoubleX:c.x doubleY:c.y], nil]];
}
delete triangulationContainer;
for(int i = 0; i < polylines.size(); i++) {
vector<p2t::Point*> poly = polylines[i];
FreeClear(poly);
}
return [outputTriangles copy];
}
hOzPoint2D here is my custom point class, but you can pass any pair of coordinates. You don't even have to output an NSArray: you can inline this method in your body-creation code.
Be careful that poly2tri has some restrictions:
you can't have the same point twice in your polygon;
the polygon must not be self-intersecting;
...
Read the poly2tri page to learn more.
The resulting array contains triangles that you attach as fixtures to the same body.
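If the body is built directly in C++ (Box2D's native API) rather than through an Objective-C wrapper, attaching the triangles might look roughly like the untested sketch below; the triangles container and the body pointer are placeholders for whatever your triangulation and body creation produced:
#include <array>
#include <vector>
#include <Box2D/Box2D.h>

// Hypothetical input: one entry per triangle, three vertices each,
// e.g. filled from poly2tri's GetTriangles() output.
void attachTriangleFixtures(b2Body* body,
                            const std::vector<std::array<b2Vec2, 3> >& triangles)
{
    for (const std::array<b2Vec2, 3>& tri : triangles)
    {
        b2PolygonShape shape;
        shape.Set(tri.data(), 3);          // each triangle becomes one convex polygon

        b2FixtureDef fixtureDef;
        fixtureDef.shape = &shape;
        fixtureDef.density = 1.0f;         // arbitrary for the sketch
        body->CreateFixture(&fixtureDef);  // all fixtures share the same body
    }
}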
I have used an approximation as well. It has some performance drawbacks, but nothing major, I guess. Code (Flash ActionScript 3, but you should be able to port it easily):
var vertices:Vector.<b2Vec2> = new Vector.<b2Vec2>();
var a:Number = _image.width / 2 / PhysicsVals.RATIO;
var b:Number = _image.height / 2 / PhysicsVals.RATIO;
var segments:int = ellipse_approximation_vertices_count; // more segments give a more precise shape, but collision detection takes longer
var segment:Number = 2 * Math.PI / segments;
for (var i:int = 0; i < segments; i++)
{
vertices.push(new b2Vec2(a * Math.cos(segment * i), b * Math.sin(segment * i)));
}
var shape:b2PolygonShape = new b2PolygonShape();
shape.SetAsVector(vertices, vertices.length);
var fixtureDef:b2FixtureDef = new b2FixtureDef();
fixtureDef.shape = shape;
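For comparison, here is an untested C++ sketch of the same approximation against Box2D's native API. Keep in mind that b2PolygonShape accepts at most b2_maxPolygonVertices vertices (8 by default), so a single convex polygon can only give a fairly coarse ellipse; a finer approximation has to be split across several fixtures or triangulated as in the other answer. The helper name addEllipseFixture and the density value are arbitrary choices for the sketch:
#include <cmath>
#include <vector>
#include <Box2D/Box2D.h>

// Hypothetical helper: approximate an ellipse with semi-axes a and b
// by a single convex polygon fixture on an existing body.
void addEllipseFixture(b2Body* body, float a, float b, int segments)
{
    // Box2D polygons are limited to b2_maxPolygonVertices (8 by default).
    if (segments > b2_maxPolygonVertices)
        segments = b2_maxPolygonVertices;

    const float kPi = 3.14159265f;
    const float step = 2.0f * kPi / segments;

    std::vector<b2Vec2> vertices;
    for (int i = 0; i < segments; ++i)
        vertices.push_back(b2Vec2(a * cosf(step * i), b * sinf(step * i)));

    b2PolygonShape shape;
    shape.Set(&vertices[0], static_cast<int32>(vertices.size()));

    b2FixtureDef fixtureDef;
    fixtureDef.shape = &shape;
    fixtureDef.density = 1.0f;       // arbitrary for the sketch
    body->CreateFixture(&fixtureDef);
}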