Core Plot Gallery real-time plot design criteria (Objective-C)

A few curiosities about the real-time plot setup in RealTimePlot.m from the CorePlotGallery sample:
// Plot space
CPTXYPlotSpace *plotSpace = (CPTXYPlotSpace *)graph.defaultPlotSpace;
plotSpace.xRange = [CPTPlotRange plotRangeWithLocation:@0.0 length:@(kMaxDataPoints - 2)];
plotSpace.yRange = [CPTPlotRange plotRangeWithLocation:@0.0 length:@1.0];
plotSpace.allowsUserInteraction = YES;
It sets a range of kMaxDataPoints (initially 52), which appears to be the number of plot points visible at the initial window/view size.
The newData delegate method trims the earliest point before a new one is added, to maintain this queue, but my question is: how was this value (52) derived?
Is it possible to calculate this visible range at run time, even when the user pinches/zooms?
Wouldn't it be better to trim the point(s) afterwards, after adding, once the number of added points is known, trimming from the beginning of the range?

It's a "magic number" derived by saying "that looks good" rather than by any empirical method. Of course you can calculate it based on the size of the plot area; using a constant is just a shortcut. Because of the design of the app, we know that the graph won't change size on iOS, so it's a reasonable shortcut to make there.
I don't understand the last part of the question.
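As for tracking the visible range at run time: pinching/zooming changes the plot space's xRange, which you can observe from the plot space delegate. Here is a minimal sketch, assuming the controller adopts CPTPlotSpaceDelegate and that data points are spaced one x-unit apart as in the sample; the maxDataPoints property is a hypothetical replacement for the kMaxDataPoints constant:
// Recompute the number of visible points whenever the user scrolls or zooms.
- (CPTPlotRange *)plotSpace:(CPTPlotSpace *)space
      willChangePlotRangeTo:(CPTPlotRange *)newRange
              forCoordinate:(CPTCoordinate)coordinate
{
    if ( coordinate == CPTCoordinateX ) {
        // With one data point per x-unit, the x-range length is the visible point count.
        self.maxDataPoints = (NSUInteger)ceil(newRange.lengthDouble) + 2; // hypothetical property
    }
    return newRange;
}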

How to use a shaderModifier to alter the color of specific triangles in a SCNGeometry

First, before I go on, I have read through SceneKit painting on texture with texture coordinates, which seems to suggest I'm on the right track.
I have a complex SCNGeometry representing a hexasphere. It's rendering really well, at a full 60fps on all of my test devices.
At the moment, all of the hexagons are being rendered with a single material because, as I understand it, every SCNMaterial I add to my geometry adds another draw call, which I can't afford.
Ultimately, I want to be able to color each of the almost 10,000 hexagons individually, so adding another material for each one is not going to work.
I had been planning to limit the color range to (say) 100 colors and then move hexagons between different geometries, each with its own colored material, but that won't work because SCNGeometry works with an immutable set of vertices.
So, my current thought/plan is to use a shader modifier as suggested by @rickster in the above-mentioned question to somehow modify the color of individual hexagons (or sets of 4 triangles).
The thing is, I sort of understand the Apple doco referred to, but I don't understand how to provide the shader with what I think must essentially be an array of colour information, somehow indexed so that the shader knows which triangles to give what colors.
The code I have now that creates the geometry reads as:
NSData *indiceData = [NSData dataWithBytes:oneMeshIndices length:sizeof(UInt32) * indiceIndex];
SCNGeometryElement *oneMeshElement =
    [SCNGeometryElement geometryElementWithData:indiceData
                                  primitiveType:SCNGeometryPrimitiveTypeTriangles
                                 primitiveCount:indiceIndex / 3
                                  bytesPerIndex:sizeof(UInt32)];
[oneMeshElements addObject:oneMeshElement];

SCNGeometrySource *oneMeshNormalSource =
    [SCNGeometrySource geometrySourceWithNormals:oneMeshNormals count:normalIndex];
SCNGeometrySource *oneMeshVerticeSource =
    [SCNGeometrySource geometrySourceWithVertices:oneMeshVertices count:vertexIndex];

SCNGeometry *oneMeshGeom =
    [SCNGeometry geometryWithSources:[NSArray arrayWithObjects:oneMeshVerticeSource, oneMeshNormalSource, nil]
                            elements:oneMeshElements];

SCNMaterial *mat1 = [SCNMaterial material];
mat1.diffuse.contents = [UIColor greenColor];
oneMeshGeom.materials = @[mat1];

SCNNode *node = [SCNNode nodeWithGeometry:oneMeshGeom];
If someone can shed some light on how to provide the shader with a way to color each triangle indexed by the indices in indiceData, that would be fantastic.
EDIT
I've tried providing the shader with a texture as a container for color information, indexed by the vertex ID, but it seems that SceneKit doesn't make the vertex ID available. My thought was to provide this texture (actually just an array of bytes, one per hexagon on the hexasphere) via the SCNMaterialProperty class and then, in the shader, pull out the appropriate byte based on the vertex number. That byte would be used to index an array of fixed colors, and the resultant color for each vertex would then give the desired result.
Without a vertex ID, this idea won't work, unless there is some other, similarly useful piece of data...
EDIT 2
Perhaps I am stubborn. I've been trying to get this to work, and as an experiment I created an image that is basically a striped rainbow and wrote the following shader, thinking it would colour my sphere with the rainbow.
It doesn't work: the entire sphere is drawn using the colour in the top-left corner of the image.
My shaderModifier code is:
#pragma arguments
sampler2D colorMap;
uniform sampler2D colorMap;
#pragma body
vec4 color = texture2D(colorMap, _surface.diffuseTexcoord);
_surface.diffuse.rgba = color;
and I apply this using the code:
SCNMaterial *mat1 = [SCNMaterial material];
mat1.locksAmbientWithDiffuse = YES;
mat1.doubleSided = YES;
mat1.shaderModifiers = @{SCNShaderModifierEntryPointSurface :
    @"#pragma arguments\nsampler2D colorMap;\nuniform sampler2D colorMap;\n#pragma body\nvec4 color = texture2D(colorMap, _surface.diffuseTexcoord);\n_surface.diffuse.rgba = color;"};
SCNMaterialProperty *colorMap = [SCNMaterialProperty materialPropertyWithContents:[UIImage imageNamed:@"rainbow.png"]];
[mat1 setValue:colorMap forKeyPath:@"colorMap"];
I had thought that _surface.diffuseTexcoord would be appropriate, but I'm beginning to think I need to map it to a coordinate in the image by knowing the dimensions of the image and interpolating somehow.
But if that's the case, what units is _surface.diffuseTexcoord in? How do I know its min/max range so that I can map it to the image?
Once again, I'm hoping someone can steer me in the right direction if these attempts are wrong.
EDIT 3
OK, so I know I'm on the right track now. I've realised that by using _surface.normal instead of _surface.diffuseTexcoord, I can treat the normal as a latitude/longitude on my sphere and map it to an x,y in the image, and I now see the hexagons being colored based on the colorMap. However, no matter what I do (so far), the normal angles seem to be fixed relative to the camera position, so when I move the camera to look at a different part of the sphere, the colorMap doesn't rotate with it.
Here is the latest shader code:
#pragma arguments
sampler2D colorMap;
uniform sampler2D colorMap;
#pragma body
float x = ((_surface.normal.x * 57.29577951) + 180.0) / 360.0;
float y = 1.0 - ((_surface.normal.y * 57.29577951) + 90.0) / 180.0;
vec4 color = texture2D(colorMap, vec2(x, y));
_output.color.rgba = color;
ANSWER
So I solved the problem. It turned out that there was no need for a shader to achieve my desired results.
The answer was to use a mappingChannel to provide the geometry with a set of texture coordinates for each vertex. These texture coordinates are used to pull color data from the appropriate texture (it all depends on how you set up your material).
So, whilst I did manage to get a shader working, there were performance issues on older devices; using a mappingChannel was much, much better, and it now works at 60fps on all devices.
I did find, though, that despite the documentation saying a mapping channel is a series of CGPoint objects, that wouldn't work on 64-bit devices, because CGPoint uses doubles instead of floats there.
I needed to define my own struct:
typedef struct {
    float x;
    float y;
} MyPoint;

MyPoint oneMeshTextureCoordinates[vertexCount];
and then, having built up an array of these, one for each vertex, I created the mappingChannel source as follows:
SCNGeometrySource *textureMappingSource =
    [SCNGeometrySource geometrySourceWithData:[NSData dataWithBytes:oneMeshTextureCoordinates
                                                             length:sizeof(MyPoint) * vertexCount]
                                     semantic:SCNGeometrySourceSemanticTexcoord
                                  vectorCount:vertexCount
                              floatComponents:YES
                          componentsPerVector:2
                            bytesPerComponent:sizeof(float)
                                   dataOffset:0
                                   dataStride:sizeof(MyPoint)];
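The new source then just goes into the sources array alongside the vertex and normal sources shown earlier; a sketch reusing the names from above:
SCNGeometry *oneMeshGeom =
    [SCNGeometry geometryWithSources:@[ oneMeshVerticeSource, oneMeshNormalSource, textureMappingSource ]
                            elements:oneMeshElements];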
EDIT:
In response to a request, here is a project that demonstrates how I use this. https://github.com/pkclsoft/HexasphereDemo

applyForce(0, 400) - SpriteKit inconsistency

So I have an object that has a physicsBody and gravity affects it. It is also dynamic.
Currently, when the user touches the screen, I run the code:
applyForce(0, 400)
The object moves up about 200 units and then falls back down due to gravity. But this only happens some of the time; other times, the object only moves about 50 units in the Y direction.
I can't find a pattern... I put my project on dropbox so it can be opened if anyone is willing to look at it.
https://www.dropbox.com/sh/z0nt79pd0l5psfg/bJTbaS2JpY
EDIT: It seems this happens when the player is bouncing off of the ground slightly for a moment after impact. Is there a way I can make it so the player doesn't bounce at all?
EDIT 2: I tried to solve this using the friction parameter, only allowing the player to "jump" when friction was 0 (you would think this would cover all cases where the player was airborne), but friction appears to be greater than 0 at all times. How else might I detect whether the player is touching an object (other than by using the y location)?
Thanks
Suggested Solution
If you're trying to implement a jump feature, I suggest you look at applyImpulse instead of applyForce. Here's the difference between the two, as described in the Sprite Kit Programming Guide:
You can choose to apply either a force or an impulse:
A force is applied for a length of time based on the amount of simulation time that passes between when you apply the force and when the next frame of the simulation is processed. So, to apply a continuous force to a body, you need to make the appropriate method calls each time a new frame is processed. Forces are usually used for continuous effects.
An impulse makes an instantaneous change to the body’s velocity that is independent of the amount of simulation time that has passed. Impulses are usually used for immediate changes to a body’s velocity.
A jump is really an instantaneous change to a body's velocity, meaning that you should apply an impulse instead of a force. To use the applyImpulse: method, figure out the desired instantaneous change in velocity, multiply by the body's mass, and use that as the impulse parameter into the function. I think you'll see better results.
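A minimal sketch of that calculation (the method name and the desired velocity change are made-up examples, not from the original project):
// Hypothetical jump handler: pick a target change in upward velocity,
// then convert it to an impulse by multiplying by the body's mass.
- (void)jumpPlayer:(SKSpriteNode *)player
{
    CGFloat desiredDeltaVy = 400.0; // example value: points/sec of added upward velocity
    [player.physicsBody applyImpulse:CGVectorMake(0, desiredDeltaVy * player.physicsBody.mass)];
}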
Explanation for Unexpected Behavior
If you're calling applyForce: outside of your update: function, what's happening is that your force is being multiplied by the amount of time passed between when you apply the force and when the next frame of the simulation is processed. This multiplier is not a constant, so you're seeing a different change in velocity every time you call applyForce: in this manner.
@godel9 has a good suggested solution, although, in my own testing, the explanation given for the unexpected behaviour is not correct.
From the SKPhysicsBody Class Reference:
The force is applied for a single simulation step (one frame).
Referring back to the SKScene Class Reference's section on the -update method:
...it is called exactly once per frame, so long as the scene is presented in a view and is not paused.
So we can assume that calling -applyForce: in SKScene's -update method should not cause a problem. But as observed, the force does not exceed gravity, despite applying an upward force much greater than gravity (400 newtons vs 9.81).
I created a test project that would create two nodes, one that falls naturally, setting affectedByGravity to TRUE, and another that calls -applyForce with the same expected gravity vector (0 newtons in the x direction, and -9.81 in the y direction). I then calculated the difference in velocity of each node in one time step, and the length of time step. From this, I then logged the acceleration (change in velocity / change in time).
Here is a snippet from my SKScene subclass:
- (id)initWithSize:(CGSize)size
{
    if (self = [super initWithSize:size])
    {
        self.backgroundColor = [UIColor purpleColor];

        SKShapeNode *node = [[SKShapeNode alloc] init];
        node.path = CGPathCreateWithEllipseInRect(CGRectMake(0, 0, 10, 10), nil);
        node.name = @"n";
        node.physicsBody = [SKPhysicsBody bodyWithCircleOfRadius:5];
        node.position = CGPointMake(0, 450);
        node.physicsBody.linearDamping = 0;
        node.physicsBody.affectedByGravity = NO;
        [self addChild:node];

        node = [[SKShapeNode alloc] init];
        node.path = CGPathCreateWithEllipseInRect(CGRectMake(0, 0, 10, 10), nil);
        node.name = @"n2";
        node.physicsBody = [SKPhysicsBody bodyWithCircleOfRadius:5];
        node.position = CGPointMake(20, 450);
        node.physicsBody.linearDamping = 0;
        [self addChild:node];
    }
    return self;
}

- (void)update:(NSTimeInterval)currentTime
{
    SKNode *node = [self childNodeWithName:@"n"];
    SKNode *node2 = [self childNodeWithName:@"n2"];

    CGFloat acc1 = (node.physicsBody.velocity.dy - self.previousVelocity) / (currentTime - self.previousTime);
    CGFloat acc2 = (node2.physicsBody.velocity.dy - self.previousVelocity2) / (currentTime - self.previousTime);

    [node2.physicsBody applyForce:CGVectorMake(0, node.physicsBody.mass * -150 * self.physicsWorld.gravity.dy)];

    NSLog(@"x:%f, y:%f, acc1:%f, acc2:%f", node.position.x, node.position.y, acc1, acc2);

    self.previousVelocity = node.physicsBody.velocity.dy;
    self.previousTime = currentTime;
    self.previousVelocity2 = node2.physicsBody.velocity.dy;
}
The results are unusual. The node that is affected by gravity in the simulation has an acceleration that is consistently multiplied by a factor of 150 when compared to the node whose force was manually applied. I have attempted this with nodes of varying size and density, but the same scalar multiplier exists.
From this I must deduce that SpriteKit internally has a default 'pixel-to-meter' ratio; that is to say, each 'meter' is equal to exactly 150 pixels. This is sometimes useful, as otherwise the scene is often too large, meaning forces react slowly (think of watching an airplane from the ground: it is travelling very fast but seemingly moving very slowly).
Sprite Kit documentation frequently suggests that exact physics calculations are not recommended (seen specifically in the section 'Fudging the Numbers'), but this inconsistency took me a long time to pin down. Hope this helps!
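Based on that observation, one way to work in real-world units is to scale applied forces by the deduced ratio. A sketch, where the 150 figure is the empirical assumption from the test above and not a documented constant:
// Assumed points-per-meter ratio deduced from the test above (not official API).
static const CGFloat kAssumedPointsPerMeter = 150.0;

// Apply a force expressed in newtons, compensating for SpriteKit's
// apparent internal unit scale.
static void ApplyForceInNewtons(SKPhysicsBody *body, CGVector newtons)
{
    [body applyForce:CGVectorMake(newtons.dx * kAssumedPointsPerMeter,
                                  newtons.dy * kAssumedPointsPerMeter)];
}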

Core Plot datasource issue

I have a question about optimizing a Core Plot graph. I want to plot the function y = 8*sin(x), so I parse the expression, compute its values over a range (for example -5 to +5), and then plot the graph.
If I drag the plot up or down, some values move out of view; they become unnecessary and could be removed, and I could then add points within the visible range to get a smoother line.
Now my datasource holds several intervals: three arrays with the y values for the intervals -5 to -2, 0 to 3, and 4 to 5 (these numbers are just examples). How can I plot these lines in my plot view? Do I need to add code like this:
CPTScatterPlot *xSquaredPlot = [[CPTScatterPlot alloc] initWithFrame:graph.defaultPlotSpace.accessibilityFrame];
xSquaredPlot.identifier = @"Grafico";
xSquaredPlot.interpolation = CPTScatterPlotInterpolationLinear;
xSquaredPlot.delegate = self;

CPTMutableLineStyle *lineStyleFunc = [CPTMutableLineStyle lineStyle];
lineStyleFunc.lineWidth = 1.0f;
lineStyleFunc.lineColor = [CPTColor redColor];
xSquaredPlot.dataLineStyle = lineStyleFunc;

xSquaredPlot.dataSource = self;
[graph addPlot:xSquaredPlot];
but the problem is that I don't know in advance how many lines I will have; I need to create them dynamically. How can I do that? By adding this code whenever I create the arrays for a new interval? And when do I need to update the datasource?
Core Plot will skip drawing points that fall outside the visible plot area when it can, so you don't have to worry too much about doing that in your datasource. You don't want to be adding and removing a lot of data points as the user scrolls around—that will just cause more work for the plotting code and slow it down.
Since you are plotting a function, one thing you can do is only generate data points in a fairly small range, say just slightly outside the visible x-range. Use a plot space delegate to monitor changes and add points as needed when the user scrolls or zooms the graph.
Use the -insertDataAtIndex:numberOfRecords: method to add data points to the plot. This will have better performance than -reloadData which forces the plot to load all of its data, not just the new values.
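A minimal sketch of that approach, assuming your controller adopts CPTPlotSpaceDelegate and that dataPoints, plot, and addDataPointsForRange: are hypothetical properties/helpers of your own class:
// Called when the user scrolls or zooms; extend the data to cover the new
// x-range, then tell the plot about the appended records only.
- (CPTPlotRange *)plotSpace:(CPTPlotSpace *)space
      willChangePlotRangeTo:(CPTPlotRange *)newRange
              forCoordinate:(CPTCoordinate)coordinate
{
    if ( coordinate == CPTCoordinateX ) {
        NSUInteger oldCount = self.dataPoints.count;
        [self addDataPointsForRange:newRange]; // hypothetical helper: compute y = 8*sin(x) for the new x values
        NSUInteger added = self.dataPoints.count - oldCount;
        if ( added > 0 ) {
            [self.plot insertDataAtIndex:oldCount numberOfRecords:added];
        }
    }
    return newRange;
}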

Core Plot: editing the plot space

I came up against a small problem. I've got a program which plots graphs, and for that I've set up a few functions.
When first loaded, the graph gets initialized with the plot space etc. Then, when the user clicks a button, a new plot gets added to the graph, but at that point I need to change the plotSpace.xRange and plotSpace.yRange. How can I do so after having already initialized the graph?
Thanks for your thoughts!
You can change the plot space ranges at any point, not just on creation of the graph. Once you do so, the graph should adjust the displayed axis ranges onscreen. I don't believe you even need to reload the data for a given graph after this.
As an example, the following code should adjust the X range of a plot to be from 0 to 100:
CPTXYPlotSpace *plotSpace = (CPTXYPlotSpace *)graph.defaultPlotSpace;
[plotSpace setXRange:[CPTPlotRange plotRangeWithLocation:CPTDecimalFromInteger(0) length:CPTDecimalFromInteger(100)]];
where graph is a CPTXYGraph instance, in this case.

Using CPTAnnotation in CorePlot DatePlot (iOS)

I am using Core Plot in my app, and I want to display an annotation over the plot symbol. I haven't found any code for this in the sample projects of the latest 0.9 version of Core Plot. After some research I have come to this point:
- (void)scatterPlot:(CPTScatterPlot *)plot plotSymbolWasSelectedAtRecordIndex:(NSUInteger)index
{
    CPTLayerAnnotation *annot = [[CPTLayerAnnotation alloc] initWithAnchorLayer:graph];
    CPTBorderedLayer *logoLayer = [[[CPTBorderedLayer alloc] initWithFrame:CGRectMake(10, 10, 100, 50)] autorelease];
    CPTFill *fillImage = [CPTFill fillWithImage:[CPTImage imageForPNGFile:@"whatEver!"]];
    logoLayer.fill = fillImage;
    annot.contentLayer = logoLayer;
    annot.rectAnchor = CPTRectAnchorTop;
    [graph addAnnotation:annot];
}
But it's obviously not working... Can anybody help me?
My goal is to get an annotation over the selected plot symbol, similar to annotations in MKMapView.
Update
It is a DatePlot, just to clarify things, and it works with time intervals since 2001 on the x-axis.
There are several examples of this in the Core Plot example apps. The gradient scatter plot in the Plot Gallery app (and several other apps as well) use this method to attach a text label to the selected point. The point selection demo in the Mac version of CPTTestApp uses a second scatter plot to draw a crosshairs over the selected point.
Remember to set the plotSymbolMarginForHitDetection property on the scatter plot, too. The default is 0, which means you have to hit the center of the point exactly to register a touch.
There are two types of annotation in Core Plot. A CPTLayerAnnotation is anchored to a given Core Animation layer (the graph in your case). A CPTPlotSpaceAnnotation is anchored to a plot space coordinate (== data coordinate). Your comment below makes it sound like you want to use a plot space annotation instead of a layer annotation.
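A minimal sketch of the plot-space variant inside the selection delegate (assuming ARC; the xValues/yValues arrays stand in for your own datasource, and the label text is illustrative):
- (void)scatterPlot:(CPTScatterPlot *)plot plotSymbolWasSelectedAtRecordIndex:(NSUInteger)index
{
    // Anchor the annotation at the selected point's data coordinates.
    NSArray *anchorPoint = @[ self.xValues[index], self.yValues[index] ];

    CPTMutableTextStyle *style = [CPTMutableTextStyle textStyle];
    style.color = [CPTColor whiteColor];
    CPTTextLayer *textLayer = [[CPTTextLayer alloc] initWithText:@"Selected" style:style];

    CPTPlotSpaceAnnotation *annotation =
        [[CPTPlotSpaceAnnotation alloc] initWithPlotSpace:plot.plotSpace
                                          anchorPlotPoint:anchorPoint];
    annotation.contentLayer = textLayer;
    annotation.displacement = CGPointMake(0.0, 20.0); // lift the label above the symbol
    [graph.plotAreaFrame.plotArea addAnnotation:annotation];
}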