How do I gradually apply velocity in Spigot?

I am using
player.setVelocity(player.getLocation().getDirection().multiply(Main.instance.getConfig().getDouble("velocity_multiplier")).setY(Main.instance.getConfig().getInt("Y_axis")));
to set a player's velocity. It makes the movement highly configurable via the config, but the problem is that when you set it too high, Spigot blocks it, and I do not want to enable allow-flight in server.properties.
So how can I avoid this? I bumped the multiplier up to 30 just for a test, and it would start to move you, glitch, and pull you back down. The console also logs that the player moved too quickly, even for smaller velocities. I was thinking of applying the velocity gradually: when you jump, it applies the starting velocity, and as you go it climbs higher (Y_axis) and travels farther (velocity_multiplier), but I do not know how to do that.

You can enable flight just for that player before applying the velocity, and then disable it again in a delayed task:
public void blabla(Player player) {
    player.setAllowFlight(true);
    player.setVelocity(player.getLocation().getDirection()
            .multiply(Main.instance.getConfig().getDouble("velocity_multiplier"))
            .setY(Main.instance.getConfig().getInt("Y_axis")));
    new BukkitRunnable() {
        @Override
        public void run() {
            player.setAllowFlight(false);
        }
    }.runTaskLater(Main.instance, 20 * 5); // inside the runnable, "this" is the BukkitRunnable, so pass the plugin instance
}
In the code I used 20 * 5 ticks to disable the flight after 5 seconds; change it to whatever you want.

Beyond code, you would likely be best served by enabling flight in server.properties and installing or writing an anti-cheat instead. Spigot's flight protection works poorly with many plugins and fails to block many players who actually attempt to fly.
The best advice is to look beyond a makeshift code workaround and create your own anti-fly check.

The maximum velocity in Bukkit (and Spigot) is 10 blocks per tick, all directions combined.
If your initial velocity is too high, you can use the scheduler to repeatedly calculate the next velocity.
To calculate this, we need some magic values; the following come from the Minecraft Wiki:
private final static double DECELERATION_RATE = 0.98D;
private final static double GRAVITY_CONSTANT = 0.08D;
private final static double VANILLA_ANTICHEAT_THRESHOLD = 9.5D; // stay just under the actual 10D limit
We first calculate the spot the player would reach at that speed and teleport them there, while applying the capped velocity for the visible part of the motion.
We are going to use a BukkitRunnable to run a task that calculates the above:
Vector speed = ...;
Player player = ...;
new BukkitRunnable() {
    double velY = speed.getY();
    final Location locCached = new Location(null, 0, 0, 0); // reused each tick to avoid allocations
    @Override
    public void run() {
        if (velY > VANILLA_ANTICHEAT_THRESHOLD) {
            player.getLocation(locCached).setY(locCached.getY() + velY);
            player.teleport(locCached);
            player.setVelocity(new Vector(0, VANILLA_ANTICHEAT_THRESHOLD, 0));
        } else {
            player.setVelocity(new Vector(0, velY, 0));
            this.cancel();
        }
        velY -= GRAVITY_CONSTANT;
        velY *= DECELERATION_RATE;
    }
}.runTaskTimer(plugin, 0, 1);
The above code will then handle the velocity problems for us and can be used in place of setVelocity.


Optaplanner: NullPointerException when calling scoreDirector.beforeVariableChanged in a simple custom move

I am building a Capacitated Vehicle Routing Problem with Time Windows, but with one small difference compared to the one provided in the documentation's examples: I don't have a depot. Instead, each order has a pickup step and a delivery step, in two different locations.
(Like in the Vehicle Routing example from the documentation, the previousStep planning variable has the CHAINED graph type, and its valueRangeProviderRefs includes both Drivers and Steps.)
This difference adds a couple of constraints:
the pickup and delivery steps of a given order must be handled by the same driver
the pickup must be before the delivery
After experimenting with constraints, I have found that it would be more efficient to implement two types of custom moves:
assign both steps of an order to a driver
rearrange the steps of a driver
I am currently implementing that first custom move. My solver's configuration looks like this:
SolverFactory<RoutingProblem> solverFactory = SolverFactory.create(
        new SolverConfig()
                .withSolutionClass(RoutingProblem.class)
                .withEntityClasses(Step.class, StepList.class)
                .withScoreDirectorFactory(new ScoreDirectorFactoryConfig()
                        .withConstraintProviderClass(Constraints.class)
                )
                .withTerminationConfig(new TerminationConfig()
                        .withSecondsSpentLimit(60L)
                )
                .withPhaseList(List.of(
                        new LocalSearchPhaseConfig()
                                .withMoveSelectorConfig(CustomMoveListFactory.getConfig())
                ))
);
My CustomMoveListFactory looks like this (I plan on migrating it to a MoveIteratorFactory later, but for the moment, this is easier to read and write):
public class CustomMoveListFactory implements MoveListFactory<RoutingProblem> {

    public static MoveListFactoryConfig getConfig() {
        MoveListFactoryConfig result = new MoveListFactoryConfig();
        result.setMoveListFactoryClass(CustomMoveListFactory.class);
        return result;
    }

    @Override
    public List<? extends Move<RoutingProblem>> createMoveList(RoutingProblem routingProblem) {
        List<Move<RoutingProblem>> moves = new ArrayList<>();

        // 1. Assign moves
        for (Order order : routingProblem.getOrders()) {
            Driver currentDriver = order.getDriver();
            for (Driver driver : routingProblem.getDrivers()) {
                if (!driver.equals(currentDriver)) {
                    moves.add(new AssignMove(order, driver));
                }
            }
        }

        // 2. Rearrange moves
        // TODO

        return moves;
    }
}
And finally, the move itself looks like this (never mind the undo or the isDoable for the moment):
@Override
protected void doMoveOnGenuineVariables(ScoreDirector<RoutingProblem> scoreDirector) {
    assignStep(scoreDirector, order.getPickupStep());
    assignStep(scoreDirector, order.getDeliveryStep());
}

private void assignStep(ScoreDirector<RoutingProblem> scoreDirector, Step step) {
    StepList beforeStep = step.getPreviousStep();
    Step afterStep = step.getNextStep();

    // 1. Insert step at the end of the driver's step list
    StepList lastStep = driver.getLastStep();
    scoreDirector.beforeVariableChanged(step, "previousStep"); // NullPointerException here
    step.setPreviousStep(lastStep);
    scoreDirector.afterVariableChanged(step, "previousStep");

    // 2. Remove step from current chained list
    if (afterStep != null) {
        scoreDirector.beforeVariableChanged(afterStep, "previousStep");
        afterStep.setPreviousStep(beforeStep);
        scoreDirector.afterVariableChanged(afterStep, "previousStep");
    }
}
The idea is that at no point am I doing an invalid chained-list manipulation.
However, as the title and the code comment indicate, I am getting a NullPointerException when I call scoreDirector.beforeVariableChanged. None of my variables are null (I've printed them to make sure). The NullPointerException doesn't occur in my code, but deep inside OptaPlanner's inner workings, making it difficult for me to fix:
Exception in thread "main" java.lang.NullPointerException
at org.drools.core.common.NamedEntryPoint.update(NamedEntryPoint.java:353)
at org.drools.core.common.NamedEntryPoint.update(NamedEntryPoint.java:338)
at org.drools.core.impl.StatefulKnowledgeSessionImpl.update(StatefulKnowledgeSessionImpl.java:1579)
at org.drools.core.impl.StatefulKnowledgeSessionImpl.update(StatefulKnowledgeSessionImpl.java:1551)
at org.optaplanner.core.impl.score.stream.drools.DroolsConstraintSession.update(DroolsConstraintSession.java:49)
at org.optaplanner.core.impl.score.director.stream.ConstraintStreamScoreDirector.afterVariableChanged(ConstraintStreamScoreDirector.java:137)
at org.optaplanner.core.impl.domain.variable.inverserelation.SingletonInverseVariableListener.retract(SingletonInverseVariableListener.java:96)
at org.optaplanner.core.impl.domain.variable.inverserelation.SingletonInverseVariableListener.beforeVariableChanged(SingletonInverseVariableListener.java:46)
at org.optaplanner.core.impl.domain.variable.listener.support.VariableListenerSupport.beforeVariableChanged(VariableListenerSupport.java:170)
at org.optaplanner.core.impl.score.director.AbstractScoreDirector.beforeVariableChanged(AbstractScoreDirector.java:430)
at org.optaplanner.core.impl.score.director.AbstractScoreDirector.beforeVariableChanged(AbstractScoreDirector.java:390)
at test.optaplanner.solver.AssignMove.assignStep(AssignMove.java:98)
at test.optaplanner.solver.AssignMove.doMoveOnGenuineVariables(AssignMove.java:85)
at org.optaplanner.core.impl.heuristic.move.AbstractMove.doMove(AbstractMove.java:35)
at org.optaplanner.core.impl.heuristic.move.AbstractMove.doMove(AbstractMove.java:30)
at org.optaplanner.core.impl.score.director.AbstractScoreDirector.doAndProcessMove(AbstractScoreDirector.java:187)
at org.optaplanner.core.impl.localsearch.decider.LocalSearchDecider.doMove(LocalSearchDecider.java:132)
at org.optaplanner.core.impl.localsearch.decider.LocalSearchDecider.decideNextStep(LocalSearchDecider.java:116)
at org.optaplanner.core.impl.localsearch.DefaultLocalSearchPhase.solve(DefaultLocalSearchPhase.java:70)
at org.optaplanner.core.impl.solver.AbstractSolver.runPhases(AbstractSolver.java:98)
at org.optaplanner.core.impl.solver.DefaultSolver.solve(DefaultSolver.java:189)
at test.optaplanner.OptaPlannerService.testOptaplanner(OptaPlannerService.java:68)
at test.optaplanner.App.main(App.java:13)
Is there something I did wrong? It seems I am following the documentation for custom moves fairly closely, apart from the fact that I am using exclusively Java code instead of Drools.
The initial solution I feed to the solver has all of the steps assigned to a single driver. There are 15 drivers and 40 orders.
In order to bypass this error, I have tried a number of different things:
remove the shadow variable annotation, turn Driver into a problem fact, and handle the nextStep field myself => this makes no difference
use Simulated Annealing + First Fit Decreasing construction heuristics, and start with steps not assigned to any driver (this was inspired by looking up the example here, which is more complete than the one from the documentation) => the NullPointerException appears on afterVariableChanged instead, but it still appears.
a number of other things which were probably not very smart
But without a more helpful error message, I can't think of anything else to try.
Thank you for your help.

HTC Vive + Leap Motion as controller

I'm trying to implement a feature similar to the HTC Vive controller's teleportation with the Leap Motion in my Unity project. I want to cast a laser pointer from the index finger and teleport the Vive play area to the position the laser hits (as is done with the controller). The problem is that the latest Leap Motion (Orion) documentation is unclear about this. Any ideas how to do that? More generally, we thought about using HandController, but we don't understand where to add the script component.
Thanks!
It's unclear to me whether the problem you're having is getting hand data in your scene at all, or using that hand data.
If you're just trying to get hand data in your scene, you can copy a prefab from one of the Unity SDK's example scenes. If you're trying to integrate Leap into an existing scene that already has a VR rig set up, check out the documentation on the core Leap components to understand what pieces need to be in place for you to start getting Hand data. LeapServiceProvider has to be somewhere in your scene to receive hand data.
As long as you have a LeapServiceProvider somewhere, you can access hands from the Leap Motion from any script, anywhere. So for getting a ray from the index fingertip, just pop this script any old place:
using Leap;
using Leap.Unity;
using UnityEngine;
public class IndexRay : MonoBehaviour {
    void Update() {
        Hand rightHand = Hands.Right;
        if (rightHand == null) return; // no tracked right hand this frame
        Vector3 indexTipPosition = rightHand.Fingers[1].TipPosition.ToVector3();
        Vector3 indexTipDirection = rightHand.Fingers[1].bones[3].Direction.ToVector3();
        // You can try using other bones in the index finger for direction as well;
        // bones[3] is the last bone; bones[1] is the bone extending from the knuckle;
        // bones[0] is the index metacarpal bone.
        Debug.DrawRay(indexTipPosition, indexTipDirection, Color.cyan);
    }
}
For what it's worth, the index fingertip direction is probably not going to be stable enough to do what you want. A more reliable strategy is to cast a line from the camera (or a theoretical "shoulder position" at a constant offset from the camera) through the index knuckle bone of the hand:
using Leap;
using Leap.Unity;
using UnityEngine;
public class ProjectiveRay : MonoBehaviour {
    // To find an approximate shoulder, let's try 12 cm right, 15 cm down, and 4 cm back relative to the camera.
    [Tooltip("An approximation for the shoulder position relative to the VR camera in the camera's (non-scaled) local space.")]
    public Vector3 cameraShoulderOffset = new Vector3(0.12F, -0.15F, -0.04F);
    public Transform shoulderTransform;
    void Update() {
        Hand rightHand = Hands.Right;
        if (rightHand == null) return; // no tracked right hand this frame
        Vector3 cameraPosition = Camera.main.transform.position;
        Vector3 shoulderPosition = cameraPosition + Camera.main.transform.rotation * cameraShoulderOffset;
        Vector3 indexKnucklePosition = rightHand.Fingers[1].bones[1].PrevJoint.ToVector3();
        Vector3 dirFromShoulder = (indexKnucklePosition - shoulderPosition).normalized;
        Debug.DrawRay(indexKnucklePosition, dirFromShoulder, Color.white);
        Debug.DrawLine(shoulderPosition, indexKnucklePosition, Color.red);
    }
}
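To then teleport to where the laser points, one rough approach (a sketch of mine, not from the original answer) is to raycast along that shoulder-to-knuckle direction and move the root of your VR rig to the hit point when the user pinches. Here, cameraRigRoot is a hypothetical reference to your rig's root transform (for example the [CameraRig] object from the SteamVR plugin), which you would assign in the Inspector:
using Leap;
using Leap.Unity;
using UnityEngine;
public class ProjectiveTeleport : MonoBehaviour {
    [Tooltip("Approximate shoulder position relative to the VR camera, in the camera's local space.")]
    public Vector3 cameraShoulderOffset = new Vector3(0.12F, -0.15F, -0.04F);
    [Tooltip("Hypothetical reference to the VR rig's root transform; assign in the Inspector.")]
    public Transform cameraRigRoot;
    void Update() {
        Hand rightHand = Hands.Right;
        if (rightHand == null || cameraRigRoot == null) return;
        Vector3 shoulderPosition = Camera.main.transform.position
                                 + Camera.main.transform.rotation * cameraShoulderOffset;
        Vector3 indexKnucklePosition = rightHand.Fingers[1].bones[1].PrevJoint.ToVector3();
        Vector3 dir = (indexKnucklePosition - shoulderPosition).normalized;
        RaycastHit hit;
        if (Physics.Raycast(indexKnucklePosition, dir, out hit, 30F)) {
            Debug.DrawLine(indexKnucklePosition, hit.point, Color.green); // the "laser"
            // Teleport on a pinch gesture; PinchStrength runs from 0 to 1.
            if (rightHand.PinchStrength > 0.9F) {
                cameraRigRoot.position = new Vector3(hit.point.x, cameraRigRoot.position.y, hit.point.z);
            }
        }
    }
}
Keeping the rig's own y position avoids sinking the camera into the floor; you would clamp or validate the hit point (e.g. only teleport onto walkable surfaces) in a real implementation.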

Render glitch with custom block boundaries in Minecraft

I'm creating a mod for Minecraft. Recently, I've tried to make a custom block, and I'm having two issues with it.
My main issue is that the block is rendering incorrectly. I want the block to be smaller than a full block. I successfully changed the block boundaries with setBlockBounds(), and while that did make the block render smaller and use the smaller boundaries, it causes other rendering issues: when I place the block, the floor below it becomes invisible and I can see through it, either to caves below, blocks behind it, or the void if there is nothing there. How do I fix that rendering? Here's a screenshot:
Additionally, my goal for this block is to emit an "aura" that gives players around it speed or some other potion effect. I have the basic code for finding players around the block and giving them speed, but I can't find a way to run this method every tick, or every X ticks, so that it reliably gives players within the box speed. There are already some blocks in the vanilla game that do this, so it must be possible. How can I do this?
For your first issue, you need to override isOpaqueCube to return false. You'll also want to override isFullCube for other parts of the code, but that isn't as important for rendering. Example:
public class YourBlock extends Block {
    // ... existing code ...

    /**
     * Used to determine ambient occlusion and culling when rebuilding chunks for render
     */
    @Override
    public boolean isOpaqueCube(IBlockState state) {
        return false;
    }

    @Override
    public boolean isFullCube(IBlockState state) {
        return false;
    }
}
Here's some info on rendering that mentions this.
Regarding your second problem, that's more complicated. It's generally achieved via a tile entity, though you can also use block updates (which is much slower). Good examples of this are BlockBeacon and TileEntityBeacon (for using tile entities) and BlockFrostedIce (for block updates). Here's some (potentially out of date) info on tile entities.
Here's an (untested) example of getting an update each tick with tile entities:
public class YourBlock extends Block implements ITileEntityProvider {
    // ... existing code ...

    /**
     * Returns a new instance of a block's tile entity class. Called on placing the block.
     * Note: createNewTileEntity comes from ITileEntityProvider, so the block must
     * implement that interface (or extend BlockContainer) for this to be called.
     */
    @Override
    public TileEntity createNewTileEntity(World worldIn, int meta) {
        return new TileEntityYourBlock();
    }
}
/**
 * Tile entity for your block.
 *
 * Tile entities normally store data, but they can also receive an update each
 * tick; to do so they must implement ITickable. So, don't forget the
 * "implements ITickable".
 */
public class TileEntityYourBlock extends TileEntity implements ITickable {
    @Override
    public void update() {
        // Your code to give potion effects to nearby players would go here.
        // If you only want to do it every so often, you can check like this:
        if (this.worldObj.getTotalWorldTime() % 80 == 0) {
            // Only runs every 80 ticks (4 seconds)
        }
    }

    // The following code isn't required to make a tile entity that gets ticked,
    // but you'll want it if you want (e.g.) to be able to set the effect.
    /**
     * Example potion effect.
     * May be null.
     */
    private Potion effect;

    public void setEffect(Potion potionEffect) {
        this.effect = potionEffect;
    }

    public Potion getEffect() {
        return this.effect;
    }

    @Override
    public void readFromNBT(NBTTagCompound compound) {
        super.readFromNBT(compound);
        int effectID = compound.getInteger("Effect");
        this.effect = Potion.getPotionById(effectID);
    }

    @Override
    public void writeToNBT(NBTTagCompound compound) {
        super.writeToNBT(compound);
        int effectID = Potion.getIdFromPotion(this.effect);
        compound.setInteger("Effect", effectID);
    }
}
// This line needs to go in the main registration.
// The ID can be anything so long as it isn't used by another mod.
GameRegistry.registerTileEntity(TileEntityYourBlock.class, "YourBlock");

Microsoft Kinect and background/environmental noise

I am currently programming with the Microsoft Kinect for Windows SDK 2 on Windows 8.1. Things are going well, and in a home dev environment obviously there is not much noise in the background compared to the 'real world'.
I would like to seek some advice from those with experience in 'real world' applications of the Kinect. How does the Kinect (especially v2) fare in a live environment with passers-by, onlookers and unexpected objects in the background? I expect that the space between the sensor and the user will usually be free of interference; what I am most mindful of right now is background noise.
While I am aware that the Kinect does not track well under direct sunlight (either on the sensor or the user) - are there certain lighting conditions or other external factors I need to factor into the code?
What I am looking for is:
What kind of issues can arise in a live environment?
How did you code or work your way around it?
Outlaw Lemur has described in detail most of the issues you may encounter in real-world scenarios.
Using Kinect for Windows version 2, you do not need to adjust the motor, since there is no motor and the sensor has a larger field of view. This will make your life much easier.
I would like to add the following tips and advice:
1) Avoid direct light (sunlight or artificial light sources)
Kinect has an infrared sensor that can be confused by direct light; no light source should shine directly into it. You can emulate such interference at your home/office by playing with an ordinary laser pointer or torch.
2) If you are tracking only one person, select the closest tracked user
If your app only needs one player, that player needs to be a) fully tracked and b) closer to the sensor than the others. It's an easy way to make participants understand who is tracked without making your UI more complex.
public static Body Default(this IEnumerable<Body> bodies)
{
    Body result = null;
    double closestBodyDistance = double.MaxValue;

    foreach (var body in bodies)
    {
        if (body.IsTracked)
        {
            var position = body.Joints[JointType.SpineBase].Position;
            var distance = position.Length();

            if (result == null || distance < closestBodyDistance)
            {
                result = body;
                closestBodyDistance = distance;
            }
        }
    }

    return result;
}
3) Use the tracking IDs to distinguish different players
Each player has a TrackingID property. Use that property when players interfere or move at random positions. Do not use that property as an alternative to face recognition though.
ulong _trackingID1 = 0;
ulong _trackingID2 = 0;

void BodyReader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            frame.GetAndRefreshBodyData(_bodies);
            var bodies = _bodies.Where(b => b.IsTracked).ToList();

            if (bodies.Count >= 2 && _trackingID1 == 0 && _trackingID2 == 0)
            {
                _trackingID1 = bodies[0].TrackingId;
                _trackingID2 = bodies[1].TrackingId;
                // Alternatively, specify body1 and body2 according to their distance from the sensor.
            }

            Body first = bodies.Where(b => b.TrackingId == _trackingID1).FirstOrDefault();
            Body second = bodies.Where(b => b.TrackingId == _trackingID2).FirstOrDefault();

            if (first != null)
            {
                // Do something...
            }
            if (second != null)
            {
                // Do something...
            }
        }
    }
}
4) Display warnings when a player is too far or too close to the sensor.
To achieve higher accuracy, players need to stand at a specific distance: not too far or too close to the sensor. Here's how to check this:
const double MIN_DISTANCE = 1.0; // in meters
const double MAX_DISTANCE = 4.0; // in meters

double distance = body.Joints[JointType.SpineBase].Position.Z; // in meters, too

if (distance > MAX_DISTANCE)
{
    // Prompt the player to move closer.
}
else if (distance < MIN_DISTANCE)
{
    // Prompt the player to move farther.
}
else
{
    // Player is at the right distance.
}
5) Always know when a player entered or left the scene.
Vitruvius provides an easy way to understand when someone entered or left the scene.
Here is the source code and here is how to use it in your app:
UsersController userReporter = new UsersController();
userReporter.BodyEntered += UserReporter_BodyEntered;
userReporter.BodyLeft += UserReporter_BodyLeft;
userReporter.Start();

void UserReporter_BodyEntered(object sender, UsersControllerEventArgs e)
{
    // A new user has entered the scene. Get the ID from the e param.
}

void UserReporter_BodyLeft(object sender, UsersControllerEventArgs e)
{
    // A user has left the scene. Get the ID from the e param.
}
6) Have a visual cue of which player is tracked
If there are a lot of people surrounding the player, you may need to show on-screen who is tracked. You can highlight the depth frame bitmap or use Microsoft's Kinect Interactions.
This is an example of removing the background and keeping the player pixels only.
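As a rough sketch of that idea (my illustration, using the Kinect v2 BodyIndexFrame rather than any specific sample): read the body index frame and keep only the pixels that belong to a tracked body. The reader setup and the actual bitmap drawing are assumed to exist elsewhere in your app:
// Hedged sketch: a BodyIndexFrameReader (opened elsewhere on the sensor) raises this event.
void BodyIndexReader_FrameArrived(object sender, BodyIndexFrameArrivedEventArgs e)
{
    using (BodyIndexFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return;

        byte[] bodyIndexData = new byte[512 * 424]; // depth/body-index resolution
        frame.CopyFrameDataToArray(bodyIndexData);

        for (int i = 0; i < bodyIndexData.Length; i++)
        {
            if (bodyIndexData[i] != 255) // values 0-5 identify a tracked body; 255 means background
            {
                // Keep/highlight this pixel in your output bitmap.
            }
            else
            {
                // Blank this pixel to remove the background.
            }
        }
    }
}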
7) Avoid glossy floors
Some floors (bright, glossy) may mirror people and Kinect may confuse some of their joints (for example, Kinect may extend your legs to the reflected body). If you can't avoid glossy floors, use the FloorClipPlane property of your BodyFrame. However, the best solution would be to have a simple carpet where you expect people to stand. A carpet would also act as an indication of the proper distance, so you would provide a better user experience.
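For reference, a minimal sketch (my addition, not the answer's code) of reading FloorClipPlane from a BodyFrame in the v2 SDK; the plane comes as a Vector4 (X, Y, Z, W), where (X, Y, Z) is the floor's normal and W is the sensor's height above the floor in meters:
using System;
using Microsoft.Kinect;

static class FloorInfo
{
    // Call this from your BodyFrameReader's FrameArrived handler.
    public static void Report(BodyFrame bodyFrame)
    {
        if (bodyFrame == null) return;
        Vector4 floor = bodyFrame.FloorClipPlane;
        // Estimate the sensor's tilt from the floor normal.
        double tiltDegrees = Math.Atan2(floor.Z, floor.Y) * 180.0 / Math.PI;
        Console.WriteLine("Sensor height: {0:N2} m, tilt: {1:N1} degrees", floor.W, tiltDegrees);
        // Joints that fall below this plane are good candidates for reflection artifacts.
    }
}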
I created an application for home use like yours before, and then presented that same application in a public setting. The result was embarrassing for me, because there were many errors that I would never have anticipated in a controlled environment. However, it did help me, because it led me to add some interesting adjustments to my code, which is centered around human detection only.
Have conditions for checking the validity of a "human".
When I showed my application in the middle of a presentation floor with many other objects and props, I found that even chairs could be mistaken for people for brief moments, which led to my application switching between the user and an inanimate object, causing it to lose track of the user and their progress. To counter this and other false-positive human detections, I added my own additional checks for a human. My most successful method was comparing the proportions of a human's body, measured in head units. Below is the code for how I did this (SDK version 1.8, C#):
bool PersonDetected = false;
double[] humanRatios = { 1.0, 4.0, 2.33, 3.0 };
/* Array indexes
 * 0 - Head (shoulder to head)
 * 1 - Leg length (foot to knee to hip)
 * 2 - Width (shoulder to shoulder center to shoulder)
 * 3 - Torso (hips to shoulder)
 */
const double MaximumDiff = 0.2; // maximum allowed deviation per ratio

// ...

double[] currentRatios = new double[4];
double headSize = Distance(skeletons[0].Joints[JointType.ShoulderCenter], skeletons[0].Joints[JointType.Head]);
currentRatios[0] = 1.0;
currentRatios[1] = (Distance(skeletons[0].Joints[JointType.FootLeft], skeletons[0].Joints[JointType.KneeLeft]) + Distance(skeletons[0].Joints[JointType.KneeLeft], skeletons[0].Joints[JointType.HipLeft])) / headSize;
currentRatios[2] = (Distance(skeletons[0].Joints[JointType.ShoulderLeft], skeletons[0].Joints[JointType.ShoulderCenter]) + Distance(skeletons[0].Joints[JointType.ShoulderCenter], skeletons[0].Joints[JointType.ShoulderRight])) / headSize;
currentRatios[3] = Distance(skeletons[0].Joints[JointType.HipCenter], skeletons[0].Joints[JointType.ShoulderCenter]) / headSize;

int correctProportions = 0;
for (int i = 1; i < currentRatios.Length; i++)
{
    double diff = currentRatios[i] - humanRatios[i];
    if (Math.Abs(diff) <= MaximumDiff)
        correctProportions++;
}
if (correctProportions >= 2)
    PersonDetected = true;
Another method I had success with was taking the sum of the squared distances between successive joints. I found that non-human detections had more variable summed distances, whereas humans were more consistent. I learned the threshold using a one-dimensional support vector machine (I found that users' summed distances were generally less than 9).
// in AllFramesReady or SkeletonFrameReady
Skeleton data;
// ...
float lastPosX = 0; // trying to detect false positives
float lastPosY = 0;
float lastPosZ = 0;
float diff = 0;
foreach (Joint joint in data.Joints)
{
    // add the distance squared
    diff += (joint.Position.X - lastPosX) * (joint.Position.X - lastPosX);
    diff += (joint.Position.Y - lastPosY) * (joint.Position.Y - lastPosY);
    diff += (joint.Position.Z - lastPosZ) * (joint.Position.Z - lastPosZ);
    lastPosX = joint.Position.X;
    lastPosY = joint.Position.Y;
    lastPosZ = joint.Position.Z;
}
if (diff < 9) // this is what my SVM learned
    PersonDetected = true;
Use player IDs and indexes to remember who is who
This ties in with the previous issue: if Kinect switched from the two users it was tracking to other people, my application would crash because of the sudden changes in data. To counter this, I would keep track of both each player's skeletal index and their player ID. To learn more about how I did this, see Kinect user Detection.
Add adjustable parameters to adapt to varying situations
Where I was presenting, the same tilt angle and other basic Kinect parameters (like near mode) did not work in the new environment. Let the user adjust some of these parameters so they can get the best setup for the job, as in the sketch below.
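For example, with the v1 SDK you can expose the tilt as a setting and clamp it to the hardware's supported range (a sketch of mine, not the answer's code):
// Hedged sketch (Kinect v1 SDK): user-adjustable tilt, clamped to what the motor supports.
void AdjustTilt(KinectSensor sensor, int requestedAngle)
{
    int angle = Math.Max(sensor.MinElevationAngle,
                Math.Min(sensor.MaxElevationAngle, requestedAngle));
    sensor.ElevationAngle = angle; // physically moves the motor, so call it sparingly
}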
Expect people to do stupid things
The next time I presented, I had adjustable tilt, and you can guess what happened: someone burned out the Kinect's motor. Anything that can be broken on a Kinect, someone will break. Leaving a warning in your documentation will not be sufficient. You should add cautionary checks on the Kinect's hardware to make sure people don't get upset when they break something inadvertently. Here is some code checking whether the user has adjusted the motor more than 20 times in two minutes.
int motorAdjustments = 0;
DateTime firstAdjustment;

// ...

// in motor adjustment code
if (motorAdjustments == 0)
    firstAdjustment = DateTime.Now;
++motorAdjustments;

if (motorAdjustments < 20)
{
    // adjust the tilt
}
else
{
    DateTime timeCheck = firstAdjustment;
    if (DateTime.Now > timeCheck.AddMinutes(2))
    {
        // reset all variables
        motorAdjustments = 1;
        firstAdjustment = DateTime.Now;
        // adjust the tilt
    }
}
I would note that all of these were issues for me with the first version of Kinect, and I don't know how many of them have been solved in the second version, as I sadly haven't gotten my hands on one yet. However, I would still implement some of these techniques, if only as back-up techniques, because there will be exceptions, especially in computer vision.

Custom Performance Counter / Minute in .NET

I'm trying to create a custom performance counter in C# that reports a rate per minute.
So far, I've only seen RateOfCountsPerSecond32 and RateOfCountsPerSecond64 available.
Does anybody know what the options are for creating a custom counter based on a per-minute rate?
This won't be directly supported. You'll have to compute the rate per minute yourself, and then use a NumberOfItems32 or NumberOfItems64 counter to display the rate. Using a helpful name like "Count / minute" will make it clear what the value is. You'll just update the counter every minute. A background (worker) thread would be a good place to do that.
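A minimal sketch of that approach (my illustration; the "MyCategory" and "Count / minute" names are hypothetical, and the category is assumed to have been registered already):
// Hedged sketch: accumulate events, then publish the total once a minute
// through a NumberOfItems32 counter.
using System;
using System.Diagnostics;
using System.Threading;

class PerMinutePublisher
{
    private static PerformanceCounter _pcPerMin =
        new PerformanceCounter("MyCategory", "Count / minute", false); // NumberOfItems32
    private static int _eventsThisMinute;
    private static Timer _timer =
        new Timer(Publish, null, TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(1));

    // Call this from your application code whenever the event you're counting happens.
    public static void RecordEvent()
    {
        Interlocked.Increment(ref _eventsThisMinute);
    }

    private static void Publish(object state)
    {
        // Atomically swap the accumulated count out and display it as this minute's rate.
        _pcPerMin.RawValue = Interlocked.Exchange(ref _eventsThisMinute, 0);
    }
}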
Alternatively, you can just depend upon the monitoring software: use a NumberOfItems32/64 counter, but have the monitoring software do the per-minute computation. The PerfMon tool built into Windows doesn't do this, but there's no reason it couldn't.
By default, PerfMon pulls data every second. In order to get a persistent picture in the Windows Performance Monitor chart, I wrote a custom counter that measures the rate of counts per minute.
After it has been running for one minute, I start receiving data from my counter.
Note that accuracy isn't important for me.
The code snippet looks like this:
class PerMinExample
{
    private static PerformanceCounter _pcPerSec;
    private static PerformanceCounter _pcPerMin;
    private static Timer _timer = new Timer(CallBack, null, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(1));
    private static Queue<CounterSample> _queue = new Queue<CounterSample>();

    static PerMinExample()
    {
        // RateOfCountsPerSecond32
        _pcPerSec = new PerformanceCounter("Category", "ORDERS PER SECOND", false);
        // NumberOfItems32
        _pcPerMin = new PerformanceCounter("Category", "ORDERS PER MINUTE", false);
        _pcPerSec.RawValue = 0;
        _pcPerMin.RawValue = 0;
    }

    public void CountSomething()
    {
        _pcPerSec.Increment();
    }

    private static void CallBack(Object o)
    {
        // Keep one sample per second; once the queue holds more than 60,
        // the sample at the front is one minute old.
        CounterSample sample = _pcPerSec.NextSample();
        _queue.Enqueue(sample);
        if (_queue.Count <= 60)
            return;

        CounterSample prev = _queue.Dequeue();
        Single numerator = (Single)sample.RawValue - (Single)prev.RawValue;
        Single denominator =
            (Single)(sample.TimeStamp - prev.TimeStamp)
            / (Single)(sample.SystemFrequency) / 60;
        Single counterValue = numerator / denominator;
        _pcPerMin.RawValue = (Int32)Math.Ceiling(counterValue);

        Console.WriteLine("ORDERS PER SEC: {0}", _pcPerSec.NextValue());
        Console.WriteLine("ORDERS PER MINUTE: {0}", _pcPerMin.NextValue());
    }
}
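One caveat worth adding (my note, not part of the answer above): the category and its counters must be registered once, typically at install time and with administrative rights, before either snippet can open them in writable mode. A sketch using the names from the example:
using System.Diagnostics;

static class CounterInstaller
{
    // Run once (elevated) before the counters above are first used.
    public static void EnsureCategoryExists()
    {
        if (PerformanceCounterCategory.Exists("Category")) return;
        var counters = new CounterCreationDataCollection
        {
            new CounterCreationData("ORDERS PER SECOND", "Orders processed per second",
                PerformanceCounterType.RateOfCountsPerSecond32),
            new CounterCreationData("ORDERS PER MINUTE", "Orders processed per minute",
                PerformanceCounterType.NumberOfItems32)
        };
        PerformanceCounterCategory.Create("Category", "Order throughput counters",
            PerformanceCounterCategoryType.SingleInstance, counters);
    }
}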