GraphKit framework in Cocoa (Objective-C)

I'm trying to draw an XY graph using GraphKit. Information about this framework is very limited on the internet...
Here's what I did:
// an XY chart is predefined in the header as GRChart and named xychart
GRXYDataSet *dataSet = [[GRXYDataSet alloc] initWithOwnerChart:xychart];
[xychart addDataSet:dataSet loadData:YES];
[xychart reloadData];
I also implemented the delegate methods:
- (double)chart:(GRChartView *)aChart xValueForDataSet:(GFDataSet *)aDataSet element:(NSUInteger)index {
    return index * 10.0;
}
- (double)chart:(GRChartView *)aChart yValueForDataSet:(GFDataSet *)aDataSet element:(NSUInteger)index {
    return index * 10.0;
}
- (NSUInteger)chart:(GRChartView *)aChart numberOfElementsForDataSet:(GFDataSet *)aDataSet {
    return 10;
}
However, it only draws the axes and no data points at all...
What did I miss here?
Thanks!

I got it. This framework only stores the data points and draws the axes according to them (it automatically calculates the bounds of each axis and zooms to a suitable plot area).
However, GRXYDataSet itself implements no drawing of the points. To get a visible graph immediately, I had to use GRAreaDataSet, a subclass of GRXYDataSet, which draws an area chart.
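A minimal sketch of the change (GraphKit is undocumented, so this assumes GRAreaDataSet shares the initializer shown above; everything else, including the delegate methods, stays the same):

GRAreaDataSet *dataSet = [[GRAreaDataSet alloc] initWithOwnerChart:xychart];
// GRAreaDataSet inherits the data callbacks from GRXYDataSet but
// actually renders the points, as a filled area chart.
[xychart addDataSet:dataSet loadData:YES];
[xychart reloadData];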
I also tried out Core Plot, but it was more difficult for me to use: I have to calculate the bounds myself and pad the graph to show the axis labels, and it doesn't look great unless I customize the symbols and lines. The default GraphKit charting, on the other hand, is nice-looking enough, though it has no documentation...
I'll try to write a tutorial for it once I've tried out everything in it :)

Related

How to make large 2d tilemap easier to load in Unity

I am creating a small game in the Unity game engine, and the map for the game is generated from a 2D tilemap. The tilemap contains so many tiles, though, that it is very hard for a device like a phone to render them all, so the frame rate drops. The map is completely static in that the only moving things in the game are the main character sprite and the camera following it. Since the map itself has no moving objects and is very simple, there must be a way to render only the needed sections of it, or perhaps to render the whole map in one go. All I have discovered from researching the topic is that a good way to do it might be to use the Unity Mesh class to turn the tilemap into a mesh. I could not figure out how to do this with a 2D tilemap, and I could not see how it would benefit the render time anyway, but if anyone could point me in the right direction for rendering large 2D tilemaps, that would be fantastic. Thanks.
Tile system:
To make the tilemap work, I put every individual tile as a prefab in my prefab folder, with the attributes changed for 2D box colliders and scaled size. I assign each individual tile prefab to a certain color on the RGB scale, and then import a PNG file that has the corresponding colors of the prefabs where I want them.
I then wrote a script which places each prefab where its associated color is. It looks like this for one tile:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Map : MonoBehaviour {

    private int levelWidth;
    private int levelHeight;

    public Transform block13;
    private Color[] tileColors;
    public Color block13Color;
    public Texture2D levelTexture;
    public PlayerMobility playerMobility;

    // Use this for initialization
    void Start () {
        levelWidth = levelTexture.width;
        levelHeight = levelTexture.height;
        loadLevel ();
    }

    void loadLevel () {
        // GetPixels returns the texture's pixels as a flat array, row by row.
        tileColors = levelTexture.GetPixels ();
        for (int y = 0; y < levelHeight; y++) {
            for (int x = 0; x < levelWidth; x++) {
                // Spawn the prefab whose assigned color matches this pixel.
                if (tileColors [x + y * levelWidth] == block13Color) {
                    Instantiate (block13, new Vector3 (x, y), Quaternion.identity);
                }
            }
        }
    }
}
With all the prefabs in place, this results in a working map (I took out the code for the other prefabs above to save space).
You can instantiate tiles that are in range of the camera and destroy tiles that are not. There are several ways to do this. But first, make sure that what's consuming your resources really is the large number of tiles and not something else.
One way is to create an empty parent GameObject for every tile (right-click in the Hierarchy > Create Empty),
then attach a script to this parent. The script holds a reference to the camera (tell me if you need help with that), calculates the distance between the tile and the camera, and instantiates the tile if the distance is less than some threshold, otherwise destroys the instance (if it's there). There's a sketch of this approach below.
It has to do this in the Update function to check the distances every frame, or you can use coroutines to check less frequently (more efficient).
Another way is to attach a script to the camera that keeps an array of all tile instances and checks their distances from the camera the same way. This only works well if you have exactly one large tilemap, because it would be hard to reuse the script with more than one.
You can also calculate the distance between the tile and the character sprite instead of the camera; pick whichever is more convenient.
If you still get frame drops after doing the above, you can zoom the camera in so that fewer tiles are in its range, but you'd then have to recalculate the distances.
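Here is a minimal sketch of the per-tile approach (the class and field names TileCuller, tilePrefab and spawnDistance are made up for illustration, and it assumes your camera is tagged MainCamera):

using UnityEngine;

// Attach to the empty parent of each tile; the tile itself is spawned
// from a prefab when the camera is close and destroyed when it is not.
public class TileCuller : MonoBehaviour {

    public Transform tilePrefab;      // assumed reference to the tile prefab
    public float spawnDistance = 20f; // tune to your camera's view size

    private Transform cam;
    private Transform instance;

    void Start () {
        cam = Camera.main.transform;
    }

    void Update () {
        float distance = Vector3.Distance (transform.position, cam.position);
        if (distance < spawnDistance && instance == null) {
            instance = Instantiate (tilePrefab, transform.position, Quaternion.identity);
        } else if (distance >= spawnDistance && instance != null) {
            Destroy (instance.gameObject);
            instance = null;
        }
    }
}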

THREE.js rotating camera around an object using orbit path

I am struggling to solve this problem.
In my scene, I have a camera which looks at the center of mass of an object. I have some buttons that set the camera position to a particular view (front view, back view, ...) along an invisible sphere that surrounds the object (constant radius).
When I click one of the buttons, I would like the camera to move from its start position to the end position along the sphere's surface. While the camera moves, I would like it to keep pointing at the object's center of mass.
Does anyone have a clue how to achieve this?
Thanks for the help!
If you are happy to use basic trigonometry, then in your initialisation section you could do this:
var cameraAngle = 0;
var orbitRange = 100;
var orbitSpeed = 2 * Math.PI/180;
var desiredAngle = 90 * Math.PI/180;
...
camera.position.set(orbitRange,0,0);
camera.lookAt(myObject.position);
Then in your render/animate section you could do this:
if (cameraAngle >= desiredAngle) { orbitSpeed = 0; } // >= rather than ==, since floating-point steps rarely land exactly on the target
else {
    cameraAngle += orbitSpeed;
    camera.position.x = Math.cos(cameraAngle) * orbitRange;
    camera.position.y = Math.sin(cameraAngle) * orbitRange;
    camera.lookAt(myObject.position); // keep the object centered while moving
}
Of course, your buttons would modify the desiredAngle (0°, 90°, 180° or 270°, presumably), you need to rotate around the correct plane (I am rotating in the XY plane above), and you can play with orbitRange and orbitSpeed until you are happy.
You can also modify orbitSpeed as the camera moves along the orbit path, speeding up and slowing down at various cameraAngles for a smoother ride. This process is called 'tweening', and you can search for 'tween' or 'tweening' if you want to know more. I think Three.js has tweening support, but I have never looked into it.
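As a hand-rolled illustration (just a sketch; a tween library would handle the edge cases better), you could replace the plain cameraAngle += orbitSpeed line with a step scaled by a sine ease, so the camera accelerates away from the start and slows into the target:

var progress = cameraAngle / desiredAngle;         // 0 → 1 along the path
var eased = Math.sin(progress * Math.PI);          // 0 at both ends, 1 mid-way
cameraAngle += Math.max(eased, 0.05) * orbitSpeed; // keep a minimum step so it still arrives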
Oh, also remember to set your camera's far property to be greater than orbitRange, or you will only see the front half of your object and, depending on what it is, that might look weird.
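For example, with a THREE.PerspectiveCamera (the factor of 2 is arbitrary headroom):

camera.far = orbitRange * 2;
camera.updateProjectionMatrix(); // changes to far only take effect after this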

Particles inside a moving Box2d world are getting drawn on top instead of inside a layer

I'm using LevelHelper to build my level, and I'm adding some particles (dynamically initialized CCParticleSystemQuads) inside it. All works fine until I move the world (it's a dynamically drawn Box2D world in which I follow the player with the camera). If I move the world, newly added particles, which emit continuously, are drawn at the right position, but in the particle animation afterwards the particles seem to be drawn relative to the global world/screen position. This gives a weird 'trippy' effect which looks totally unrealistic. The particles should be redrawn/refreshed inside the world.
LevelHelperLoader *lh = gameLayer.lh;
LHLayer *layer = [lh layerWithUniqueName:@"MAIN_LAYER"];
NSArray *array = [lh spritesWithTag:WORTEL];
CCParticleSystemQuad *particle;
CGPoint position;
for (LHSprite *sprite in array) {
    particle = [CCParticleSystemQuad particleWithFile:@"DirtParticles.plist"];
    [layer addChild:particle z:0];
    position = sprite.position;
    position.y += sprite.contentSize.height * 0.5f;
    [particle setPosition:position];
    [particle resetSystem];
}
Does anybody know what I might be doing wrong?
Try changing the particle position type:
particle.positionType = kCCPositionTypeFree;
The alternatives are kCCPositionTypeRelative and kCCPositionTypeGrouped. You may have to try each of them to see which best fits your scenario; I'm guessing it's either "free" or "relative".
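For instance, you could set it right after creating each emitter in the loop from the question (a sketch; which constant fits best depends on how you scroll the world):

particle = [CCParticleSystemQuad particleWithFile:@"DirtParticles.plist"];
// Emit relative to the emitter's position so already-emitted particles
// follow the layer when the world moves.
particle.positionType = kCCPositionTypeRelative;
[layer addChild:particle z:0];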

Core graphics drag over pixels instead of points

Ok, I haven't found it anywhere. What should I do if I want to draw with Core Graphics per pixel? Like… I want to draw a line to pixel (45,61) and then (46,63) instead of drawing to point (23,31) or something like that. So what should I do in this case?
Should I use something like:
CGContextAddLineToPoint(context,22.5,30.5);
CGContextAddLineToPoint(context,23,31.5);
Or is there some better way?
I know about contentScaleFactor, but should I use it like this (when plotting some function, for example):
for (int x = bounds.origin.x; x <= bounds.origin.x + bounds.size.width * [self contentScaleFactor]; x++)
    CGContextAddLineToPoint(context, x / [self contentScaleFactor], y(x / [self contentScaleFactor]));
I know the example code is not superb, but I think you'll get the idea.
I'll be very thankful for help, because I'm a bit confused by all this scale factor stuff.
Sounds like you are doing Assignment 3 from the Stanford iOS course on iTunes U? :)
I think you are on the right track, as my implementation looks very similar:
for (int x = self.bounds.origin.x; x <= self.bounds.origin.x + (self.bounds.size.width * self.contentScaleFactor); x++) {
    // get the scaled X value to ask the dataSource for
    CGFloat axeX = (x / self.contentScaleFactor - self.origin.x) / self.scale;
    // using axeX here for testing, which draws an x = y graph;
    // replace axeX with the dataSource Y-point calculation
    CGFloat axeY = -1 * ((axeX * self.scale) + self.origin.y);
    if (x == self.bounds.origin.x) {
        CGContextMoveToPoint(context, x / self.contentScaleFactor, axeY);
    } else {
        CGContextAddLineToPoint(context, x / self.contentScaleFactor, axeY);
    }
}
Tested in the iOS Simulator as iPhone 4 (contentScaleFactor 1.0) and on an iPhone 4S device (contentScaleFactor 2.0).
I'd be happy to hear about possible improvements from other readers, because I am still learning.

Zooming in an NSView

I have an NSView in which the user can draw circles. These circles are stored as an array of NSBezierPaths, and in drawRect: I loop through the array and call -stroke on each of the paths. How do I add a button to zoom the NSView in and out? Do I just change the bounds of the view?
Thanks.
Send your view a scaleUnitSquareToSize: message.
You might also find this informative:
https://developer.apple.com/library/content/qa/qa1346/_index.html
The code in that document lets you add a "scale" property to a view.
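For example (a minimal sketch; myView is a placeholder, and note that the scaling is cumulative, so calling this twice scales by 4x):

// Double the effective size of everything the view draws.
[myView scaleUnitSquareToSize:NSMakeSize(2.0, 2.0)];
[myView setNeedsDisplay:YES];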
The above answers didn't work for my scenario, but they led me to a solution.
The updated link in @Peter's answer was helpful: scaleUnitSquareToSize
I have found two solutions for zooming:
Cropping the bounds manually
Scaling the bounds with scaleUnitSquareToSize
I have created a small test project. Both solutions can be found in my GitHub repo: BoundsAndFramesCroppingAndScalling
To understand bounds vs. frames, read this SO article: difference-between-the-frame-and-the-bounds.
Swift scaling code:
let scaleSize = NSSize(width: 0.5, height: 0.5)
// 0.5 - half the size
// 1.0 - no scaling
// 2.0 - double the size, ... etc.
myView?.scaleUnitSquare(to: scaleSize)
myView?.needsDisplay = true