The problem is:
I have created a terrain and I need to fly over it with the camera. I added the "Mouse Look" script to the camera, added a Rigidbody with Use Gravity unchecked, and put the following code in the Update method:
float vert = Input.GetAxis("Vertical");
float hor = Input.GetAxis("Horizontal");

if (vert != 0)
{
    if (!Physics.Raycast(this.transform.position, this.transform.forward, 5))
    {
        transform.Translate(Vector3.forward * flySpeed * vert);
    }
    else
    {
        transform.Translate(Vector3.up * flySpeed * vert);
    }
}
if (hor != 0)
{
    if (!Physics.Raycast(this.transform.position, this.transform.forward, 5))
    {
        transform.Translate(Vector3.right * flySpeed * hor);
    }
    else
    {
        transform.Translate(Vector3.up * flySpeed * hor);
    }
}
if (Input.GetKey(KeyCode.E))
{
    transform.Translate(Vector3.up * flySpeed);
}
else if (Input.GetKey(KeyCode.Q))
{
    Vector3 v = Vector3.down * flySpeed;
    if (!Physics.Raycast(this.transform.position, this.transform.forward, 5))
    {
        transform.Translate(v);
    }
}
But sometimes when I go down (Q), the camera goes through the terrain. Why?
It also looks ugly when you move the camera forward as low as possible over the terrain: the camera does not fall through it, but it starts to jump. Why is that?
Make sure you have a Terrain Collider on your terrain.
In addition to S.Richmond's answer, you can add a CharacterController or a similar collider component to your camera.
See this answer in the unity questions network:
http://answers.unity3d.com/questions/45763/getting-camera-to-not-see-under-the-ground.html
The Update() method in a MonoBehaviour gets called once each frame. Because the rate at which Update() is called depends on the frame rate, moving an object by a constant value in Update() results in inconsistent motion. This can be corrected by multiplying a constant speed by Time.deltaTime, which is the time in seconds since the last frame was rendered. This will fix the fall-through unless flySpeed is set too high (where the change in position each frame is greater than the collider's size). Additionally, as suggested above, a CharacterController without a Rigidbody is better suited to this situation. Rigidbodies are for objects primarily controlled by physics, while the CharacterController is for objects controlled by scripts.
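A minimal sketch combining both suggestions, assuming the script sits on the camera with a CharacterController attached (the flySpeed value here is illustrative):

using UnityEngine;

public class FlyCamera : MonoBehaviour
{
    public float flySpeed = 10f; // units per second, not per frame

    void Update()
    {
        float vert = Input.GetAxis("Vertical");
        float hor = Input.GetAxis("Horizontal");
        // Scale by Time.deltaTime so movement is frame-rate independent.
        Vector3 move = (transform.forward * vert + transform.right * hor)
                       * flySpeed * Time.deltaTime;
        // CharacterController.Move resolves collisions, so the camera
        // stops at the terrain instead of tunneling through it.
        GetComponent<CharacterController>().Move(move);
    }
}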
I acknowledge that I have reused the sample code that #Benjamin used in different examples.
I want to have both object-detection counts and accident counts in my model. I need code to detect an object, but object detection does not necessarily lead to an accident. When an object is detected, the agent (transporter) should either stop or change its route. The following code handles this functionality; fieldOfView is a polygon in front of the transporter.
for (Worker thisPed : main.worker) {
    // for each pedestrian in the model
    double pedX = thisPed.getX() - getX();
    double pedY = thisPed.getY() - getY();
    if (fieldOfView.contains(pedX, pedY)) {
        v_pedInDanger = true;
        setSpeed(0);
        break;
    }
}
How do I tell the transporter to change its route instead of stopping? I could not find any code for this.
I also need to calculate the distance between the transporter and the detected object, and if the distance is <= 1 meter we count it as an accident, like the following:
for (Worker ped : main.worker) {
    double dist = distanceTo(ped);
    if (dist <= 1) {
        v_pedCollisionNumber += 1;
        ped.v_isWorkerCollide = true;
        send("accident", this);
    }
}
The second snippet does not work.
Any advice, please? Is there a better approach?
Could you please advise how I would go about using the touch input functions in Unity to make an object change its x direction every time the user taps the screen? For example, in a 2D game, an object is moving forward (to the right) along the x axis; if the user taps, the object should move backward along the x axis (to the left). Sorry, no code is produced.
It's as simple as your name "tony" :)
What you can do is make a simple script that moves your object left and right. On a screen touch you can easily change its direction by just multiplying by -1.
Simple script that you can attach to your object.
using UnityEngine;
using System.Collections;

public class MoveObject : MonoBehaviour
{
    // Unused in this snippet; could be used to clamp how far the object travels.
    float _limit = 5;
    // 1 for right and -1 for left.
    float _direction = 1;
    // Speed in units per second.
    float _speed = 0.6f;

    void Update()
    {
        // Scale the step by Time.deltaTime so motion is frame-rate independent.
        transform.position = Vector3.MoveTowards(transform.position,
            transform.position + Vector3.right * _direction,
            _speed * Time.deltaTime);
        if (Input.GetMouseButtonDown(0))
            _direction *= -1;
    }
}
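Note that GetMouseButtonDown(0) also fires for the first touch on mobile. If you prefer the explicit touch API, an equivalent check inside Update() would be (a hedged sketch; Input.GetTouch and TouchPhase are standard Unity APIs):

// Flip direction on the first frame of a new touch.
if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
    _direction *= -1;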
Hope this helps :)
I am currently programming with the Microsoft Kinect for Windows SDK 2 on Windows 8.1. Things are going well, and in a home dev environment there is obviously not much background noise compared to the 'real world'.
I would like to seek some advice from those with experience in 'real world' applications of the Kinect. How does the Kinect (especially v2) fare in a live environment with passers-by, onlookers and unexpected objects in the background? I expect there will usually not be interference in the space between the sensor and the user; what I am most mindful of right now is background noise.
While I am aware that the Kinect does not track well under direct sunlight (on either the sensor or the user), are there certain lighting conditions or other external factors I need to factor into the code?
The answers I am looking for:
What kind of issues can arise in a live environment?
How did you code or work your way around them?
Outlaw Lemur has described in detail most of the issues you may encounter in real-world scenarios.
Using Kinect for Windows version 2, you do not need to adjust the motor, since there is no motor and the sensor has a larger field of view. This will make your life much easier.
I would like to add the following tips and advice:
1) Avoid direct light (natural or artificial lighting)
Kinect has an infrared sensor that can be confused by direct light. No light source should shine straight at this sensor. You can emulate such an environment at your home/office by playing with an ordinary laser pointer and torches.
2) If you are tracking only one person, select the closest tracked user
If your app only needs one player, that player needs to be a) fully tracked and b) closer to the sensor than the others. It's an easy way to make participants understand who is tracked without making your UI more complex.
// Extension method: returns the closest fully-tracked body, or null.
public static Body Default(this IEnumerable<Body> bodies)
{
    Body result = null;
    double closestBodyDistance = double.MaxValue;

    foreach (var body in bodies)
    {
        if (body.IsTracked)
        {
            var position = body.Joints[JointType.SpineBase].Position;
            var distance = position.Length();

            if (result == null || distance < closestBodyDistance)
            {
                result = body;
                closestBodyDistance = distance;
            }
        }
    }

    return result;
}
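Usage would look something like this, assuming a _bodies array refreshed each frame as in the next tip's snippet:

// Pick the player that drives the UI; null when nobody is tracked.
Body active = _bodies.Default();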
3) Use the tracking IDs to distinguish different players
Each player has a TrackingID property. Use that property when players interfere or move at random positions. Do not use that property as an alternative to face recognition though.
Body[] _bodies = new Body[6]; // Kinect v2 tracks up to 6 bodies.
ulong _trackingID1 = 0;
ulong _trackingID2 = 0;

void BodyReader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            frame.GetAndRefreshBodyData(_bodies);

            var bodies = _bodies.Where(b => b.IsTracked).ToList();

            if (bodies != null && bodies.Count >= 2 && _trackingID1 == 0 && _trackingID2 == 0)
            {
                _trackingID1 = bodies[0].TrackingId;
                _trackingID2 = bodies[1].TrackingId;
                // Alternatively, specify body1 and body2 according to their distance from the sensor.
            }

            Body first = bodies.Where(b => b.TrackingId == _trackingID1).FirstOrDefault();
            Body second = bodies.Where(b => b.TrackingId == _trackingID2).FirstOrDefault();

            if (first != null)
            {
                // Do something...
            }

            if (second != null)
            {
                // Do something...
            }
        }
    }
}
4) Display warnings when a player is too far or too close to the sensor.
To achieve higher accuracy, players need to stand at a specific distance: not too far or too close to the sensor. Here's how to check this:
const double MIN_DISTANCE = 1.0; // in meters
const double MAX_DISTANCE = 4.0; // in meters

double distance = body.Joints[JointType.SpineBase].Position.Z; // in meters, too

if (distance > MAX_DISTANCE)
{
    // Prompt the player to move closer.
}
else if (distance < MIN_DISTANCE)
{
    // Prompt the player to move farther.
}
else
{
    // Player is at the right distance.
}
5) Always know when a player entered or left the scene.
Vitruvius provides an easy way to understand when someone entered or left the scene.
Here is the source code and here is how to use it in your app:
UsersController userReporter = new UsersController();
userReporter.BodyEntered += UserReporter_BodyEntered;
userReporter.BodyLeft += UserReporter_BodyLeft;
userReporter.Start();

void UserReporter_BodyEntered(object sender, UsersControllerEventArgs e)
{
    // A new user has entered the scene. Get the ID from the e parameter.
}

void UserReporter_BodyLeft(object sender, UsersControllerEventArgs e)
{
    // A user has left the scene. Get the ID from the e parameter.
}
6) Have a visual cue of which player is tracked
If there are a lot of people surrounding the player, you may need to show on-screen who is tracked. You can highlight the depth-frame bitmap or use Microsoft's Kinect Interactions.
This is an example of removing the background and keeping the player pixels only.
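As a hedged sketch of that idea with the v2 API (the handler name and the display logic are assumptions; BodyIndexFrame and CopyFrameDataToArray are real SDK members): the body index frame marks which depth pixels belong to a tracked body (values 0-5) versus background (255).

void BodyIndexReader_FrameArrived(object sender, BodyIndexFrameArrivedEventArgs e)
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return;

        byte[] indexData = new byte[512 * 424]; // body index frame resolution
        frame.CopyFrameDataToArray(indexData);

        for (int i = 0; i < indexData.Length; i++)
        {
            bool isPlayer = indexData[i] != 255;
            // Write a highlight color for player pixels and a dimmed or
            // transparent pixel otherwise into your display bitmap.
        }
    }
}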
7) Avoid glossy floors
Some floors (bright, glossy) may mirror people and Kinect may confuse some of their joints (for example, Kinect may extend your legs to the reflected body). If you can't avoid glossy floors, use the FloorClipPlane property of your BodyFrame. However, the best solution would be to have a simple carpet where you expect people to stand. A carpet would also act as an indication of the proper distance, so you would provide a better user experience.
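A hedged sketch of reading FloorClipPlane inside a BodyFrame handler like the one in tip 3 (FloorClipPlane is a real v2 BodyFrame property; the rejection rule below is an assumption): the plane's X, Y, Z components are the floor normal and W is the sensor's height above the floor, so a joint that evaluates below the plane is likely a reflection.

using (var frame = e.FrameReference.AcquireFrame())
{
    if (frame != null)
    {
        // Plane equation: X*x + Y*y + Z*z + W = 0 for points on the floor.
        Vector4 floor = frame.FloorClipPlane;

        CameraSpacePoint p = body.Joints[JointType.FootLeft].Position;
        double heightAboveFloor = floor.X * p.X + floor.Y * p.Y + floor.Z * p.Z + floor.W;

        if (heightAboveFloor < 0)
        {
            // The joint is "below" the floor: likely a reflection, so ignore it.
        }
    }
}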
I created an application for home use like yours, and then presented that same application in a public setting. The result was embarrassing for me, because there were many errors I would never have anticipated in a controlled environment. However, it did help me, because it led me to add some interesting adjustments to my code, which center around human detection.
Have conditions for checking the validity of a "human".
When I showed my application in the middle of a presentation floor with many other objects and props, I found that even chairs could be mistaken for people for brief moments, which caused my application to switch between the user and an inanimate object, lose track of the user, and lose their progress. To counter this and other false-positive human detections, I added my own additional checks for a human. My most successful method was comparing the proportions of a human's body, measured in head units. (head units picture) Below is the code for how I did this (SDK version 1.8, C#):
bool PersonDetected = false;
double[] humanRatios = { 1.0, 4.0, 2.33, 3.0 };
/* Array indexes
 * 0 - Head (shoulder to head)
 * 1 - Leg length (foot to knee to hip)
 * 2 - Width (shoulder to shoulder center to shoulder)
 * 3 - Torso (hips to shoulder)
 */
const double MaximumDiff = 0.2; // tolerance per ratio; I used .2
....
double[] currentRatios = new double[4];
double headSize = Distance(skeletons[0].Joints[JointType.ShoulderCenter],
                           skeletons[0].Joints[JointType.Head]);

currentRatios[0] = 1.0;
currentRatios[1] = (Distance(skeletons[0].Joints[JointType.FootLeft], skeletons[0].Joints[JointType.KneeLeft])
                  + Distance(skeletons[0].Joints[JointType.KneeLeft], skeletons[0].Joints[JointType.HipLeft])) / headSize;
currentRatios[2] = (Distance(skeletons[0].Joints[JointType.ShoulderLeft], skeletons[0].Joints[JointType.ShoulderCenter])
                  + Distance(skeletons[0].Joints[JointType.ShoulderCenter], skeletons[0].Joints[JointType.ShoulderRight])) / headSize;
currentRatios[3] = Distance(skeletons[0].Joints[JointType.HipCenter], skeletons[0].Joints[JointType.ShoulderCenter]) / headSize;

int correctProportions = 0;
for (int i = 1; i < currentRatios.Length; i++)
{
    double diff = currentRatios[i] - humanRatios[i];
    if (Math.Abs(diff) <= MaximumDiff)
        correctProportions++;
}

if (correctProportions >= 2)
    PersonDetected = true;
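The Distance helper isn't shown above; a minimal sketch of what it presumably computes (straight-line distance between two joints in skeleton space; Joint.Position is a SkeletonPoint in SDK 1.8):

static double Distance(Joint a, Joint b)
{
    double dx = a.Position.X - b.Position.X;
    double dy = a.Position.Y - b.Position.Y;
    double dz = a.Position.Z - b.Position.Z;
    return Math.Sqrt(dx * dx + dy * dy + dz * dz);
}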
Another method I had success with was summing the squared distances between consecutive joints. I found that non-human detections had more variable summed distances, whereas humans were more consistent. I learned the threshold using a one-dimensional support vector machine (I found that users' summed distances were generally less than 9).
//in AllFramesReady or SkeletonFrameReady
Skeleton data;
...
float lastPosX = 0; // trying to detect false positives
float lastPosY = 0;
float lastPosZ = 0;
float diff = 0;

foreach (Joint joint in data.Joints)
{
    //add the distance squared
    diff += (joint.Position.X - lastPosX) * (joint.Position.X - lastPosX);
    diff += (joint.Position.Y - lastPosY) * (joint.Position.Y - lastPosY);
    diff += (joint.Position.Z - lastPosZ) * (joint.Position.Z - lastPosZ);

    lastPosX = joint.Position.X;
    lastPosY = joint.Position.Y;
    lastPosZ = joint.Position.Z;
}

if (diff < 9) //this is what my svm learned
    PersonDetected = true;
Use player IDs and indexes to remember who is who
This ties in with the previous issue: if Kinect switched from the two users it was tracking to others, my application would crash because of the sudden changes in data. To counter this, I kept track of each player's skeletal index and player ID. To learn more about how I did this, see Kinect user Detection.
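A hedged sketch of the SDK 1.x mechanism for this (AppChoosesSkeletons and ChooseSkeletons are real SkeletonStream members; the surrounding variables are assumptions):

// Stop Kinect from re-assigning tracking on its own, then lock on to the
// two TrackingIds we already know.
sensor.SkeletonStream.AppChoosesSkeletons = true;
sensor.SkeletonStream.ChooseSkeletons(player1TrackingId, player2TrackingId);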
Add adjustable parameters to adapt to varying situations
Where I was presenting, the tilt angle and other basic Kinect parameters (like near mode) that worked at home did not work in the new environment. Let the user adjust some of these parameters so they can get the best setup for the job.
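A hedged sketch of exposing tilt as a setting (ElevationAngle and its Min/Max bounds are real KinectSensor properties in SDK 1.x; LoadUserSetting is a hypothetical settings helper):

// Load the user's preferred tilt, clamp it to the hardware's legal range,
// and apply it. LoadUserSetting stands in for your own settings store.
int desiredAngle = LoadUserSetting("kinectTilt", 0);
desiredAngle = Math.Max(sensor.MinElevationAngle,
                        Math.Min(sensor.MaxElevationAngle, desiredAngle));
sensor.ElevationAngle = desiredAngle;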
Expect people to do stupid things
The next time I presented, I had adjustable tilt, and you can guess what happened: someone burned out the Kinect's motor. Anything on the Kinect that can be broken, someone will break. A warning in your documentation is not sufficient. You should add cautionary checks on the Kinect's hardware so that people don't get upset when they break something inadvertently. Here is some code checking whether the user has adjusted the motor more than 20 times in two minutes.
int motorAdjustments = 0;
DateTime firstAdjustment;
...
//in motor adjustment code
if (motorAdjustments == 0)
    firstAdjustment = DateTime.Now;
++motorAdjustments;

if (motorAdjustments < 20)
{
    //adjust the tilt
}
else
{
    DateTime timeCheck = firstAdjustment;
    if (DateTime.Now > timeCheck.AddMinutes(2))
    {
        //reset all variables
        motorAdjustments = 1;
        firstAdjustment = DateTime.Now;
        //adjust the tilt
    }
}
I would note that all of these were issues for me with the first version of the Kinect, and I don't know how many of them have been solved in the second version, as I sadly haven't gotten my hands on one yet. However, I would still implement some of these techniques, if only as back-up techniques, because there will be exceptions, especially in computer vision.
I have a window, which I have to keep square. My code is
primaryStage.minHeightProperty().bind(scene.widthProperty());
primaryStage.minWidthProperty().bind(scene.heightProperty());
It resizes the square correctly when enlarging the window, but has problems when I try to make it smaller.
Sometimes one of the sides gets a little shorter or longer than the other one. Is there a fix for this? Did I do something wrong in the code I currently use?
This doesn't answer your question directly, but I think it's actually better UX since you don't restrict the user. Instead of fixing the aspect ratio of the stage, consider using insets to pad the content you want to remain square:
private val DEF_PAD = 6.0

chart.paddingProperty.bind(boundValue(stage.widthProperty, stage.heightProperty) {
  // Maintain a square plot with padding
  val w = stage.getWidth
  val h = stage.getHeight
  val extra = DEF_PAD + 0.5 * Math.abs(w - h)
  if (w > h) {
    new Insets(DEF_PAD, extra, DEF_PAD, extra)
  } else if (h > w) {
    new Insets(extra, DEF_PAD, extra, DEF_PAD)
  } else {
    new Insets(DEF_PAD)
  }
})
(Written in Scala. boundValue is just a utility that creates a binding with a varargs dependency list)
I'm using Cocos2d iPhone with Box2D to create a basic physics engine.
Occasionally the user is required to drag around a small box2D object.
Creating touch joints on small objects is a bit hit-and-miss, with the game engine seeing a tap on blank space as often as actually creating the appropriate touch joint. In practice this means the user is constantly mashing their fingers against the screen in vain attempts to move a stubborn object. I want the game to select small objects easily, without this hit-and-miss effect.
I could create the small objects with larger sensors around them, but this is not ideal because objects above a certain size (around 40px diameter) don't need this extra layer of complexity; and the small objects are simply the big objects scaled down to size.
What are some strategies I could use to allow the user experience to be better when moving small objects?
Here's the AABB code in ccTouchBegan:
b2Vec2 locationWorld = b2Vec2(touchLocation.x / PTM_RATIO, touchLocation.y / PTM_RATIO);

b2AABB aabb;
b2Vec2 delta = b2Vec2(1.0 / PTM_RATIO, 1.0 / PTM_RATIO);
// Changing the 1.0 here to a larger value doesn't make any noticeable difference.
aabb.lowerBound = locationWorld - delta;
aabb.upperBound = locationWorld + delta;

SimpleQueryCallback callback(locationWorld);
world->QueryAABB(&callback, aabb);

if (callback.fixtureFound) {
    // dragging code, updating sprite location etc.
}
SimpleQueryCallback code:
class SimpleQueryCallback : public b2QueryCallback
{
public:
    b2Vec2 pointToTest;
    b2Fixture* fixtureFound;

    SimpleQueryCallback(const b2Vec2& point)
    {
        pointToTest = point;
        fixtureFound = NULL;
    }

    bool ReportFixture(b2Fixture* fixture)
    {
        b2Body* body = fixture->GetBody();
        if (body->GetType() == b2_dynamicBody)
        {
            if (fixture->TestPoint(pointToTest))
            {
                fixtureFound = fixture;
                return false;
            }
        }
        return true;
    }
};
What about a minimum collision box for touches? Objects with less than 40px diameter use the 40px diameter, all larger objects use their actual diameter.
What I ended up doing, thanks to iforce2d, was to change ReportFixture in SimpleQueryCallback to:
bool ReportFixture(b2Fixture* fixture)
{
    b2Body* body = fixture->GetBody();
    if (body->GetType() == b2_dynamicBody)
    {
        //if (fixture->TestPoint(pointToTest)) {
        fixtureFound = fixture;
        return true;
        //}
    }
    return true;
}
And increase the delta to 10.0/PTM_RATIO.