I'm developing a game in Unity that uses chunk-based procedural terrain generation. My terrain has to be splatmapped at runtime, so I've developed an algorithm to do that.
I use a "chunk" prefab that is instantiated every time the world generator decides to create a new fragment. Each chunk has a Terrain component as well as my script that performs the splatmapping (and height generation in the future). The problem is that when I instantiate the prefab, every instance still uses the same TerrainData object containing the heights and splatmaps, so every change to one chunk also affects the others.
I've found that I can Instantiate the terrainData from the prefab to clone it, and that solved half of the problem. Now I can change heightmaps independently, but the splatmaps still seem to be connected.
void Start()
{
map = GetComponentInParent<MapGenerator>();
terrain = GetComponent<Terrain>();
//Copy terrain data, solves heightmap problems
terrain.terrainData = GameObject.Instantiate(terrain.terrainData);
//Try to generate new splatmaps
for (int i = 0; i < terrain.terrainData.alphamapTextures.Length; i++)
{
//Debug before changing
File.WriteAllBytes(Application.dataPath + "/../SPLAT_" + i + ".png", terrain.terrainData.alphamapTextures[i].EncodeToPNG());
//Try to change
terrain.terrainData.alphamapTextures[i] = new Texture2D(terrain.terrainData.alphamapHeight, terrain.terrainData.alphamapWidth);
//Debug after changing
File.WriteAllBytes(Application.dataPath + "/../SPLAT_x" + i + ".png", terrain.terrainData.alphamapTextures[i].EncodeToPNG());
}
//Calculate chunk offset
offset = new Vector2(transform.localPosition.x * (terrain.terrainData.alphamapHeight / terrain.terrainData.size.x),
transform.localPosition.z * (terrain.terrainData.alphamapWidth / terrain.terrainData.size.z));
//Splatting usually happens here
//SplatMap();
}
Unfortunately, this piece of code doesn't work. alphamapTextures is a read-only array, and changing its elements seems to do nothing (I get the same output in both debug .pngs).
I know I can use reflection and force the alphamapTextures reallocation, but I hope there is a better way to do this. If not, that's a Unity design flaw or a bug.
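For reference, a minimal, untested sketch of the documented alternative: writing the weights through TerrainData.GetAlphamaps / SetAlphamaps instead of replacing the alphamap textures (this is just the general API shape, not the SplatMap() routine above):
float[,,] maps = terrain.terrainData.GetAlphamaps(0, 0, terrain.terrainData.alphamapWidth, terrain.terrainData.alphamapHeight);
for (int y = 0; y < maps.GetLength(0); y++)
{
    for (int x = 0; x < maps.GetLength(1); x++)
    {
        maps[y, x, 0] = 1f;                          // full weight on the first splat layer
        for (int layer = 1; layer < maps.GetLength(2); layer++)
            maps[y, x, layer] = 0f;                  // clear the remaining layers
    }
}
terrain.terrainData.SetAlphamaps(0, 0, maps);        // Unity updates the alphamap textures itself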
Thank you for any replies.
I'm trying to implement a feature similar to the HTC Vive's controller with the Leap Motion in my Unity project. I want to generate a laser pointer from the index finger and teleport the Vive's room to the position the laser hits (as is done with the controller). The problem is that the latest Leap Motion (Orion) documentation is unclear on this. Any ideas how to do that? More generally, we thought about using HandController, but we don't understand where to add the script component.
Thanks!
It's unclear to me whether the problem you're having is getting hand data in your scene at all, or using that hand data.
If you're just trying to get hand data in your scene, you can copy a prefab from one of the Unity SDK's example scenes. If you're trying to integrate Leap into an existing scene that already has a VR rig set up, check out the documentation on the core Leap components to understand what pieces need to be in place for you to start getting Hand data. LeapServiceProvider has to be somewhere in your scene to receive hand data.
As long as you have a LeapServiceProvider somewhere, you can access hands from the Leap Motion from any script, anywhere. So for getting a ray from the index fingertip, just pop this script any old place:
using Leap;
using Leap.Unity;
using UnityEngine;
public class IndexRay : MonoBehaviour {
void Update() {
Hand rightHand = Hands.Right;
if (rightHand == null) return; // no tracked right hand this frame
Vector3 indexTipPosition = rightHand.Fingers[1].TipPosition.ToVector3();
Vector3 indexTipDirection = rightHand.Fingers[1].bones[3].Direction.ToVector3();
// You can try using other bones in the index finger for direction as well;
// bones[3] is the last bone; bones[1] is the bone extending from the knuckle;
// bones[0] is the index metacarpal bone.
Debug.DrawRay(indexTipPosition, indexTipDirection, Color.cyan);
}
}
For what it's worth, the index fingertip direction is probably not going to be stable enough to do what you want. A more reliable strategy is to cast a line from the camera (or a theoretical "shoulder position" at a constant offset from the camera) through the index knuckle bone of the hand:
using Leap;
using Leap.Unity;
using UnityEngine;
public class ProjectiveRay : MonoBehaviour {
// To find an approximate shoulder, let's try 12 cm right, 15 cm down, and 4 cm back relative to the camera.
[Tooltip("An approximation for the shoulder position relative to the VR camera in the camera's (non-scaled) local space.")]
public Vector3 cameraShoulderOffset = new Vector3(0.12F, -0.15F, -0.04F);
public Transform shoulderTransform;
void Update() {
Hand rightHand = Hands.Right;
if (rightHand == null) return; // no tracked right hand this frame
Vector3 cameraPosition = Camera.main.transform.position;
Vector3 shoulderPosition = cameraPosition + Camera.main.transform.rotation * cameraShoulderOffset;
Vector3 indexKnucklePosition = rightHand.Fingers[1].bones[1].PrevJoint.ToVector3();
Vector3 dirFromShoulder = (indexKnucklePosition - shoulderPosition).normalized;
Debug.DrawRay(indexKnucklePosition, dirFromShoulder, Color.white);
Debug.DrawLine(shoulderPosition, indexKnucklePosition, Color.red);
}
}
I am using player.setVelocity(player.getLocation().getDirection().multiply(Main.instance.getConfig().getDouble("velocity_multiplier")).setY(Main.instance.getConfig().getInt("Y_axis"))); to apply velocity to a player. It makes the movement highly configurable via the config, but the problem is that when you set it too high, Spigot blocks it. I do not want to enable allow-flight in server.properties.
So how can I avoid this? I bumped the multiplier up to 30 just for a test, and it would start to move you, glitch, and pull you back down. The console also reports that the player moved too quickly, even for smaller amounts of velocity. I was thinking of applying the velocity gradually: when you jump, it applies the starting velocity, and as you go it pushes you higher (Y_axis) and farther (velocity_multiplier), but I do not know how to do that.
You can enable flight just for that player before applying the velocity, and then disable it again in a delayed task:
public void blabla(Player player){
player.setAllowFlight(true);
player.setVelocity(player.getLocation().getDirection().multiply(Main.instance.getConfig().getDouble("velocity_multiplier")).setY(Main.instance.getConfig().getInt("Y_axis")));
new BukkitRunnable() {
@Override
public void run() {
player.setAllowFlight(false);
}
}.runTaskLater(this, 20 * 5); // "this" must be your plugin instance (i.e. this method lives in your main plugin class)
}
In the code I used 20 * 5 ticks to disable flight after 5 seconds; you can change that to whatever you want.
Beyond code, you would likely be best served by allowing flight in server.properties and installing or developing an anti-cheat plugin. Spigot's flight protection works poorly with many plugins and does not successfully block many players who attempt to fly.
My best advice would be to look beyond a makeshift code solution and instead create your own anti-fly check.
The maximum velocity in Bukkit (and Spigot) is 10 blocks per tick, all directions combined.
If your initial velocity is too high, you can use the scheduler to repeatedly calculate the next velocity.
To calculate this, we need some magic values; the following come from the Minecraft Wiki.
private final static double DECELERATION_RATE = 0.98D;
private final static double GRAVITY_CONSTANT = 0.08D;
private final static double VANILLA_ANTICHEAT_THRESHOLD = 9.5D; // actual limit is 10D
We first need to calculate the spot the player would reach at that speed and teleport him there, while applying the velocity only for the first part.
We are going to use a BukkitRunnable to run a task that calculates the above:
Vector speed = ...;
Player player = ...;
new BukkitRunnable() {
double velY = speed.getY();
Location locCached = new Location(null,0,0,0);
@Override
public void run() {
if (velY > VANILA_ANTICHEAT_THRESHOLD) {
player.getLocation(locCached).setY(locCached.getY() + velY);
player.teleport(locCached);
player.setVelocity(new Vector(0, VANILLA_ANTICHEAT_THRESHOLD, 0));
} else {
player.setVelocity(new Vector(0,velY,0));
this.cancel();
}
velY -= GRAVITY_CONSTANT;
velY *= DECELERATION_RATE;
}
}.runTaskTimer(plugin,0,1);
The above code will then handle the velocity problems for us and can be used in place of setVelocity.
I am currently programming with the Microsoft Kinect for Windows SDK 2 on Windows 8.1. Things are going well, and in a home dev environment obviously there is not much noise in the background compared to the 'real world'.
I would like to seek some advice from those with experience in 'real world' applications of the Kinect. How does the Kinect (especially v2) fare in a live environment with passers-by, onlookers and unexpected objects in the background? I expect that the space between the Kinect sensor and the user will usually be free of interference; what I am most mindful of right now is background noise of that kind.
While I am aware that the Kinect does not track well under direct sunlight (either on the sensor or the user), are there certain lighting conditions or other external factors I need to factor into the code?
The answers I am looking for are:
What kind of issues can arise in a live environment?
How did you code or work your way around it?
Outlaw Lemur has described in detail most of the issues you may encounter in real-world scenarios.
Using Kinect for Windows version 2, you do not need to adjust the motor, since there is no motor and the sensor has a larger field of view. This will make your life much easier.
I would like to add the following tips and advice:
1) Avoid direct light (physical or internal lighting)
The Kinect has an infrared sensor that can get confused, so it should not be exposed directly to any light sources. You can emulate such conditions at home or in the office by playing with an ordinary laser pointer and torches.
2) If you are tracking only one person, select the closest tracked user
If your app only needs one player, that player needs to be a) fully tracked and b) closer to the sensor than the others. It's an easy way to make participants understand who is tracked without making your UI more complex.
public static Body Default(this IEnumerable<Body> bodies)
{
Body result = null;
double closestBodyDistance = double.MaxValue;
foreach (var body in bodies)
{
if (body.IsTracked)
{
var position = body.Joints[JointType.SpineBase].Position;
var distance = position.Length();
if (result == null || distance < closestBodyDistance)
{
result = body;
closestBodyDistance = distance;
}
}
}
return result;
}
3) Use the tracking IDs to distinguish different players
Each player has a TrackingId property. Use that property when players overlap or move to random positions. Do not use it as an alternative to face recognition, though.
ulong _trackingID1 = 0;
ulong _trackingID2 = 0;
Body[] _bodies = new Body[6]; // Kinect v2 tracks up to 6 bodies
void BodyReader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
using (var frame = e.FrameReference.AcquireFrame())
{
if (frame != null)
{
frame.GetAndRefreshBodyData(_bodies);
var bodies = _bodies.Where(b => b.IsTracked).ToList();
if (bodies != null && bodies.Count >= 2 && _trackingID1 == 0 && _trackingID2 == 0)
{
_trackingID1 = bodies[0].TrackingId;
_trackingID2 = bodies[1].TrackingId;
// Alternatively, specify body1 and body2 according to their distance from the sensor.
}
Body first = bodies.Where(b => b.TrackingId == _trackingID1).FirstOrDefault();
Body second = bodies.Where(b => b.TrackingId == _trackingID2).FirstOrDefault();
if (first != null)
{
// Do something...
}
if (second != null)
{
// Do something...
}
}
}
}
4) Display warnings when a player is too far or too close to the sensor.
To achieve higher accuracy, players need to stand at a specific distance: not too far or too close to the sensor. Here's how to check this:
const double MIN_DISTANCE = 1.0; // in meters
const double MAX_DISTANCE = 4.0; // in meters
double distance = body.Joints[JointType.SpineBase].Position.Z; // in meters, too
if (distance > MAX_DISTANCE)
{
// Prompt the player to move closer.
}
else if (distance < MIN_DISTANCE)
{
// Prompt the player to move farther.
}
else
{
// Player is in the right distance.
}
5) Always know when a player entered or left the scene.
Vitruvius provides an easy way to understand when someone entered or left the scene.
Here is the source code and here is how to use it in your app:
UsersController userReporter = new UsersController();
userReporter.BodyEntered += UserReporter_BodyEntered;
userReporter.BodyLeft += UserReporter_BodyLeft;
userReporter.Start();
void UserReporter_BodyEntered(object sender, UsersControllerEventArgs e)
{
// A new user has entered the scene. Get the ID from e param.
}
void UserReporter_BodyLeft(object sender, UsersControllerEventArgs e)
{
// A user has left the scene. Get the ID from e param.
}
6) Have a visual cue of which player is tracked
If a lot of people surround the player, you may need to show on-screen who is being tracked. You can highlight the depth frame bitmap or use Microsoft's Kinect Interactions.
An example of this is removing the background and keeping only the player's pixels.
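A minimal sketch of that idea with the v2 SDK, assuming a BodyIndexFrameReader is already open and the buffers (hypothetical names) are sized to the depth resolution: the body index frame stores, for each depth pixel, the index (0-5) of the body it belongs to, or 255 for background, so everything that is not a tracked body can be blanked out.
byte[] _bodyIndexBuffer;   // depth width * height, allocated elsewhere
byte[] _maskPixels;        // grayscale output of the same size
void BodyIndexReader_FrameArrived(object sender, BodyIndexFrameArrivedEventArgs e)
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return;
        frame.CopyFrameDataToArray(_bodyIndexBuffer); // one byte per depth pixel
        for (int i = 0; i < _bodyIndexBuffer.Length; i++)
        {
            // 0-5 means the pixel belongs to a tracked body; 255 means background.
            _maskPixels[i] = _bodyIndexBuffer[i] != 255 ? (byte)255 : (byte)0;
        }
        // Write _maskPixels (or the matching color pixels) into your display bitmap.
    }
}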
7) Avoid glossy floors
Some floors (bright, glossy) may mirror people, and the Kinect may confuse some of their joints (for example, it may extend your legs down into the reflected body). If you can't avoid glossy floors, use the FloorClipPlane property of your BodyFrame. However, the best solution is to put a simple carpet where you expect people to stand. A carpet also acts as an indication of the proper distance, so you provide a better user experience.
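One way to use that property, as a rough sketch (the tolerance is an assumed value): the plane's X, Y, Z components are the floor normal and W is the sensor's height above the floor, so a joint's signed distance to the plane tells you whether it ended up below the floor, which is a typical symptom of a reflection.
Vector4 floorPlane = frame.FloorClipPlane; // inside the BodyFrameReader callback from tip 3
bool IsJointBelowFloor(Joint joint, Vector4 floor)
{
    CameraSpacePoint p = joint.Position;
    // Signed distance from the floor plane; negative means below the floor.
    float distance = floor.X * p.X + floor.Y * p.Y + floor.Z * p.Z + floor.W;
    return distance < -0.05f; // small tolerance for sensor noise
}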
I previously created an application for home use like yours, and then presented that same application in a public setting. The result was embarrassing for me, because there were many errors I would never have anticipated in a controlled environment. However, it did help me, because it led me to add some interesting adjustments to my code, which is centered around human detection only.
Have conditions for checking the validity of a "human".
When I showed my application in the middle of a presentation floor with many other objects and props, I found that even chairs could be mistaken for people for brief moments, which led to my application switching between the user and an inanimate object, causing it to lose track of the user and lose their progress. To counter this and other false-positive human detections, I added my own additional checks for a human. My most successful method was comparing the proportions of a human's body, measured in head units. Below is the code for how I did this (SDK version 1.8, C#).
bool PersonDetected = false;
double[] humanRatios = { 1.0f, 4.0, 2.33, 3.0 };
/*Array indexes
* 0 - Head (shoulder to head)
* 1 - Leg length (foot to knee to hip)
* 2 - Width (shoulder to shoulder center to shoulder)
* 3 - Torso (hips to shoulder)
*/
....
double[] currentRatios = new double[4];
double headSize = Distance(skeletons[0].Joints[JointType.ShoulderCenter], skeletons[0].Joints[JointType.Head]);
currentRatios[0] = 1.0f;
currentRatios[1] = (Distance(skeletons[0].Joints[JointType.FootLeft], skeletons[0].Joints[JointType.KneeLeft]) + Distance(skeletons[0].Joints[JointType.KneeLeft], skeletons[0].Joints[JointType.HipLeft])) / headSize;
currentRatios[2] = (Distance(skeletons[0].Joints[JointType.ShoulderLeft], skeletons[0].Joints[JointType.ShoulderCenter]) + Distance(skeletons[0].Joints[JointType.ShoulderCenter], skeletons[0].Joints[JointType.ShoulderRight])) / headSize;
currentRatios[3] = Distance(skeletons[0].Joints[JointType.HipCenter], skeletons[0].Joints[JointType.ShoulderCenter]) / headSize;
const double MaximumDiff = 0.2; // I used .2 for my MaximumDiff
int correctProportions = 0;
for (int i = 1; i < currentRatios.Length; i++)
{
double diff = currentRatios[i] - humanRatios[i];
if (Math.Abs(diff) <= MaximumDiff)
correctProportions++;
}
if (correctProportions >= 2)
PersonDetected = true;
Another method I had success with was summing the squared distances between consecutive joints. I found that non-human detections had more variable sums, whereas humans are more consistent. I learned the threshold using a one-dimensional support vector machine (I found that users' summed distances were generally less than 9).
//in AllFramesReady or SkeletalFrameReady
Skeleton data;
...
float lastPosX = 0; // trying to detect false-positives
float lastPosY = 0;
float lastPosZ = 0;
float diff = 0;
foreach (Joint joint in data.Joints)
{
//add the distance squared
diff += (joint.Position.X - lastPosX) * (joint.Position.X - lastPosX);
diff += (joint.Position.Y - lastPosY) * (joint.Position.Y - lastPosY);
diff += (joint.Position.Z - lastPosZ) * (joint.Position.Z - lastPosZ);
lastPosX = joint.Position.X;
lastPosY = joint.Position.Y;
lastPosZ = joint.Position.Z;
}
if (diff < 9)//this is what my svm learned
PersonDetected = true;
Use player IDs and indexes to remember who is who
This ties in with the previous issue: if the Kinect switched the two users it was tracking to other people, my application would crash because of the sudden changes in data. To counter this, I keep track of both each player's skeleton index and their tracking ID. To learn more about how I did this, see Kinect user Detection.
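A minimal sketch of that approach with the 1.8 SDK (the sensor, the skeletons array and the two saved tracking IDs are assumed to exist elsewhere): once you have validated two skeletons as human, lock the skeleton stream onto their tracking IDs and match skeletons back by TrackingId rather than by array index, which can change between frames.
// Take over skeleton selection so the runtime doesn't swap players on its own.
sensor.SkeletonStream.AppChoosesSkeletons = true;
sensor.SkeletonStream.ChooseSkeletons(player1TrackingId, player2TrackingId);
// When a frame arrives, find each player by TrackingId instead of by index.
Skeleton player1 = skeletons.FirstOrDefault(s => s.TrackingId == player1TrackingId);
Skeleton player2 = skeletons.FirstOrDefault(s => s.TrackingId == player2TrackingId);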
Add adjustable parameters to adapt to varying situations
Where I was presenting, the tilt angle and other basic Kinect parameters (like near mode) that worked at home did not work in the new environment. Let the user adjust some of these parameters so they can get the best setup for the job.
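For example, a sketch of what that might look like with the 1.8 SDK (the settings object is hypothetical; each property would be bound to something the user can change at runtime):
// Drive the basic sensor parameters from user-adjustable settings instead of hard-coding them.
sensor.ElevationAngle = settings.TiltAngle; // keep within MinElevationAngle..MaxElevationAngle
sensor.DepthStream.Range = settings.UseNearMode ? DepthRange.Near : DepthRange.Default;
sensor.SkeletonStream.TrackingMode = settings.SeatedMode ? SkeletonTrackingMode.Seated : SkeletonTrackingMode.Default;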
Expect people to do stupid things
The next time I presented, I had an adjustable tilt, and you can guess what happened: someone burned out the Kinect's motor. Anything on the Kinect that can be broken, someone will break. Leaving a warning in your documentation will not be sufficient. You should add cautionary checks around the Kinect's hardware so people don't get upset when they break something inadvertently. Here is some code that limits the user to 20 motor adjustments within a two-minute window.
int motorAdjustments = 0;
DateTime firstAdjustment;
...
//in motor adjustment code
if (motorAdjustments == 0)
firstAdjustment = DateTime.Now;
++motorAdjustments;
if (motorAdjustments < 20)
{
//adjust the tilt
}
else
{
DateTime timeCheck = firstAdjustment;
if (DateTime.Now > timeCheck.AddMinutes(2))
{
//reset all variables
motorAdjustments = 1;
firstAdjustment = DateTime.Now;
//adjust the tilt
}
}
I would note that all of these were issues for me with the first version of the Kinect, and I don't know how many of them have been solved in the second version, as I sadly haven't gotten my hands on one yet. However, I would still implement some of these techniques, if only as back-up techniques, because there will be exceptions, especially in computer vision.
I am new to OpenGL and I have been using the Red Book and the Super Bible. In the Super Bible, I have gotten to the section about using objects loaded from files. So far, I don't think I have a problem understanding what is going on and how to do it, but it got me thinking about making my own mesh within my own app, in essence a modeling app. I have done a lot of searching through both of my references as well as the internet, and I have yet to find a nice tutorial about implementing such functionality in one's own app. I found an API that provides exactly this functionality, but I am trying to understand the implementation, not just the interface.
Thus far, I have created an "app" (I use this term lightly) that gives you a view you can click in to add vertices. The vertices don't connect; they are just displayed where you click. My concern is that the method I stumbled upon while experimenting is not the way I should be implementing this process.
I am working on a Mac and using Objective-C and C in Xcode.
MyOpenGLView.m
#import "MyOpenGLView.h"
@interface MyOpenGLView () {
NSTimer *_renderTimer;
GLuint VAO, VBO;
GLuint totalVertices;
GLsizei bufferSize;
}
@end
@implementation MyOpenGLView
/* Set up OpenGL view with a context and pixelFormat with doubleBuffering */
/* NSTimer implementation */
- (void)drawS3DView {
double currentTime = CACurrentMediaTime();
NSOpenGLContext *currentContext = self.openGLContext;
[currentContext makeCurrentContext];
CGLLockContext([currentContext CGLContextObj]);
const GLfloat color[] = {
sinf(currentTime * 0.2),
sinf(currentTime * 0.3),
cosf(currentTime * 0.4),
1.0
};
glClearBufferfv(GL_COLOR, 0, color);
glUseProgram(shaderProgram);
glBindVertexArray(VAO);
glPointSize(10);
glDrawArrays(GL_POINTS, 0, totalVertices);
CGLFlushDrawable([currentContext CGLContextObj]);
CGLUnlockContext([currentContext CGLContextObj]);
}
#pragma mark - User Interaction
- (void)mouseUp:(NSEvent *)theEvent {
NSPoint mouseLocation = [theEvent locationInWindow];
NSPoint mouseLocationInView = [self convertPoint:mouseLocation fromView:self];
GLfloat x = -1 + mouseLocationInView.x * 2/(GLfloat)self.bounds.size.width;
GLfloat y = -1 + mouseLocationInView.y * 2/(GLfloat)self.bounds.size.height;
NSOpenGLContext *currentContext = self.openGLContext;
[currentContext makeCurrentContext];
CGLLockContext([currentContext CGLContextObj]);
[_renderer addVertexWithLocationX:x locationY:y];
CGLUnlockContext([currentContext CGLContextObj]);
}
- (void)addVertexWithLocationX:(GLfloat)x locationY:(GLfloat)y {
glBindBuffer(GL_ARRAY_BUFFER, VBO);
GLfloat vertices[(totalVertices * 2) + 2];
glGetBufferSubData(GL_ARRAY_BUFFER, 0, (totalVertices * 2) * sizeof(GLfloat), vertices); // size is in bytes
for (int i = 0; i < ((totalVertices * 2) + 2); i++) {
if (i == (totalVertices * 2)) {
vertices[i] = x;
} else if (i == (totalVertices * 2) + 1) {
vertices[i] = y;
}
}
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
totalVertices ++;
}
@end
The app is supposed to take the location of the mouse click and use it as a vertex location. With each added vertex, I first bind the VBO to make sure it is active. Next, I create a new array to hold my current vertex locations (totalVertices) plus space for one more vertex (+2 for x and y). Then I use glGetBufferSubData to bring the data back from the VBO into this array. Using a for loop I add the X and Y values to the end of the array. Finally, I send this data back to the GPU into the VBO and call totalVertices++ so I know how many vertices the array holds the next time I want to add a vertex.
This brings me to my question: Am I doing this right? Put another way, should I be keeping a copy of the buffer data on the CPU side so that I don't have to call out to the GPU and have the data sent back for editing? That way, I wouldn't call glGetBufferSubData; I would just create a bigger array, add the new vertex to the end, and then call glBufferData to reallocate the VBO with the updated vertex data.
** I tried to include my thinking process so that someone like myself who is very inexperienced in programming can hopefully understand what I am trying to do. I don't want anyone to be offended by my explanations of what I did. **
I would certainly avoid reading the data back. Not only because of the extra data copy, but also to avoid synchronization between CPU and GPU.
When you make an OpenGL call, you can picture the driver building a GPU command, queuing it up for later submission to the GPU, and then returning. These commands will then be submitted to the GPU at a later point. The idea is that the GPU can run as independently as possible from whatever runs on the CPU, which includes your application. CPU and GPU operating in parallel with minimal dependencies is very desirable for performance.
For most glGet*() calls, this asynchronous execution model breaks down. They will often have to wait until the GPU completed all (or at least some) pending commands before they can return the data. So the CPU might block while only the GPU is running, which is undesirable.
For that reason, you should definitely keep your CPU copy of the data so that you don't ever have to read it back.
Beyond that, there are a few options. It will all depend on your usage pattern, the performance characteristics of the specific platform, etc. To really get the maximum out of it, there's no way around implementing multiple variations, and benchmarking them.
For what you're describing, I would probably start with something that works similar to a std::vector in C++. You allocate a certain amount of memory (typically named capacity) that is larger than what you need at the moment. Then you can add data without reallocating, until you fill the allocated capacity. At that point, you can for example double the capacity.
Applying this to OpenGL, you can reserve a certain amount of memory by calling glBufferData() with NULL as the data pointer. Keep track of the capacity you allocated, and populate the buffer with calls to glBufferSubData(). When adding a single point in your example code, you would call glBufferSubData() with just the new point. Only when you run out of capacity, you call glBufferData() with a new capacity, and then fill it with all the data you already have.
In pseudo-code, the initialization would look something like this:
int capacity = 10;
glBufferData(GL_ARRAY_BUFFER,
capacity * sizeof(Point), NULL, GL_DYNAMIC_DRAW);
std::vector<Point> data;
Then each time you add a point:
data.push_back(newPoint);
if (data.size() <= capacity) {
glBufferSubData(GL_ARRAY_BUFFER,
(data.size() - 1) * sizeof(Point), sizeof(Point), &newPoint);
} else {
capacity *= 2;
glBufferData(GL_ARRAY_BUFFER,
capacity * sizeof(Point), NULL, GL_DYNAMIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER,
0, data.size() * sizeof(Point), &data[0]);
}
glMapBufferRange() is another option to consider for updating buffer data, as an alternative to glBufferSubData(). Going further, you can look into using multiple buffers and cycling through them, instead of updating just a single buffer. This is where benchmarking comes into play, because there isn't a single approach that will be best for every possible platform and use case.
I am currently writing a program to help me control complex light installations. The idea is that I tell the program to start a preset, and then the app has three options (depending on the preset type):
1) the lights go to one position (so only one group of data is sent when the preset starts)
2) the lights follow a mathematical equation (e.g. a sine with a timer to make smooth circles)
3) the lights respond to a flow of data (e.g. a MIDI controller)
So I decided to go with an object I call the AppBrain, which receives data from the controllers and the templates, but is also able to send processed data to the lights.
Now, I come from non-native programming, and I'm a bit wary of working with a lot of processing, events and timing, and I don't yet understand the Cocoa logic 100%.
This is where the actual question starts, sorry.
What I want to do is, when I load the preset, parse it to prepare the timer/data-receive event so it doesn't have to go through every option for 100 lights 100 times per second.
To explain more deeply, here's how I would do it in JavaScript (crappy pseudo-code, of course):
var lightsFunctions = {};
function prepareTemplate(theTemplate){
//Let's assume here the template is just an array, and I won't show all the processing
switch(theTemplate.typeOfTemplate){
case "simpledata":
sendAllDataToLights(); // Simple here
break;
case "periodic":
for(light in theTemplate.lights){
switch(light.typeOfEquation){
case "sin":
lightsFunctions[light.id] = doTheSinus; // doTheSinus being an existing function
break;
case "cos":
...
}
}
function onFrame(){
for(light in lightsFunctions){
lightsFunctions[light]();
}
}
var theTimer = setTimeout(onFrame, theTemplate.delay);
break;
case "controller":
//do the same pre-processing without the timer, to know which function to execute for which light
break;
}
}
So, my idea is to store the processing functions I need in an NSArray, so I don't need to test the type on each frame and lose time/CPU.
I don't know if I'm being clear, or if my idea is possible or the right way to go. I'm mostly looking for algorithm ideas, and if you have some code that might point me in the right direction, even better. (I know of performSelector, but I don't know if it is the best fit for this situation.)
Thanks;
I_
First of all, don't spend time optimizing what you don't know to be a performance problem. A hundred type checks per frame are nothing in the native world, even on weaker mobile CPUs.
Now, to your problem. I take it you are writing some kind of configuration / DSL to specify the light control sequences. One way of doing it is to store blocks in your NSArray. A block is Objective-C's equivalent of a function object in JavaScript. For example:
typedef void (^LightFunction)(void);
- (NSArray*) parseProgram ... {
NSMutableArray* result = [NSMutableArray array];
if(...) {
LightFunction simpleData = ^{ sendDataToLights(); };
[result addObject:simpleData];
} else if(...) {
Light* light = [self getSomeLight:...];
LightFunction periodic = ^{
// Note how you can access the local scope of the outside function.
// Make sure you use automatic reference counting for this.
[light doSomethingWithParam:someParam];
};
[result addObject:periodic];
}
return result;
}
...
NSArray* program = [self parseProgram:...];
// To run your program
for(LightFunction func in program) {
func();
}