I am looking for ideas regarding an optimal/minimal structure for the inner render loop in Dart 2, for a 2d game (if that part matters).
Clarification / Explanation: Every framework / language has an efficient way to:
1) Deal with time.
2) Render to the screen (via memory, a canvas, an image, or whatever).
For example, here is someone who answered this for the C# language. Being new to Flutter / Dart, my first attempt (below) is failing, and as of right now I cannot tell where the problem is.
I have searched high and low without finding any help on this, so if you can assist, you have my eternal gratitude.
There is a post on Reddit by u/inu-no-policemen (a bit old). I used this to start. I suspect that it is crushing the garbage collector or leaking memory.
This is what I have so far, but it crashes pretty quickly (at least in the debugger):
import 'dart:ui';
import 'dart:typed_data';
import 'dart:math' as math;
import 'dart:async';
main() async {
var deviceTransform = new Float64List(16)
..[0] = 1.0 // window.devicePixelRatio
..[5] = 1.0 // window.devicePixelRatio
..[10] = 1.0
..[15] = 1.0;
var previous = Duration.zero;
var initialSize = await Future<Size>(() {
if (window.physicalSize.isEmpty) {
var completer = Completer<Size>();
window.onMetricsChanged = () {
if (!window.physicalSize.isEmpty) {
completer.complete(window.physicalSize);
}
};
return completer.future;
}
return window.physicalSize;
});
var world = World(initialSize.width / 2, initialSize.height / 2);
window.onBeginFrame = (now) {
// we rebuild the screenRect here since it can change
var screenRect = Rect.fromLTWH(0.0, 0.0, window.physicalSize.width, window.physicalSize.height);
var recorder = PictureRecorder();
var canvas = Canvas(recorder, screenRect);
var delta = previous == Duration.zero ? Duration.zero : now - previous;
previous = now;
var t = delta.inMicroseconds / Duration.microsecondsPerSecond;
world.update(t);
world.render(t, canvas);
var builder = new SceneBuilder()
..pushTransform(deviceTransform)
..addPicture(Offset.zero, recorder.endRecording())
..pop();
window.render(builder.build());
window.scheduleFrame();
};
window.scheduleFrame();
window.onPointerDataPacket = (packet) {
var p = packet.data.first;
world.input(p.physicalX, p.physicalY);
};
}
class World {
static var _objectColor = Paint()..color = Color(0xa0a0a0ff);
static var _s = 200.0;
static var _objectRect = Rect.fromLTWH(-_s / 2, -_s / 2, _s, _s);
static var _rotationsPerSecond = 0.25;
var _turn = 0.0;
double _x;
double _y;
World(this._x, this._y);
void input(double x, double y) { _x = x; _y = y; }
void update(double t) { _turn += t * _rotationsPerSecond; }
void render(double t, Canvas canvas) {
var tau = math.pi * 2;
canvas.translate(_x, _y);
canvas.rotate(tau * _turn);
canvas.drawRect(_objectRect, _objectColor);
}
}
Well, after a month of beating my face against this, I finally figured out the right question and that got me to this:
Flutter Layers / Raw
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// This example shows how to perform a simple animation using the raw interface
// to the engine.
import 'dart:math' as math;
import 'dart:typed_data';
import 'dart:ui' as ui;
void beginFrame(Duration timeStamp) {
// The timeStamp argument to beginFrame indicates the timing information we
// should use to clock our animations. It's important to use timeStamp rather
// than reading the system time because we want all the parts of the system to
// coordinate the timings of their animations. If each component read the
// system clock independently, the animations that we processed later would be
// slightly ahead of the animations we processed earlier.
// PAINT
final ui.Rect paintBounds = ui.Offset.zero & (ui.window.physicalSize / ui.window.devicePixelRatio);
final ui.PictureRecorder recorder = ui.PictureRecorder();
final ui.Canvas canvas = ui.Canvas(recorder, paintBounds);
canvas.translate(paintBounds.width / 2.0, paintBounds.height / 2.0);
// Here we determine the rotation according to the timeStamp given to us by
// the engine.
final double t = timeStamp.inMicroseconds / Duration.microsecondsPerMillisecond / 1800.0;
canvas.rotate(math.pi * (t % 1.0));
canvas.drawRect(ui.Rect.fromLTRB(-100.0, -100.0, 100.0, 100.0),
ui.Paint()..color = const ui.Color.fromARGB(255, 0, 255, 0));
final ui.Picture picture = recorder.endRecording();
// COMPOSITE
final double devicePixelRatio = ui.window.devicePixelRatio;
final Float64List deviceTransform = Float64List(16)
..[0] = devicePixelRatio
..[5] = devicePixelRatio
..[10] = 1.0
..[15] = 1.0;
final ui.SceneBuilder sceneBuilder = ui.SceneBuilder()
..pushTransform(deviceTransform)
..addPicture(ui.Offset.zero, picture)
..pop();
ui.window.render(sceneBuilder.build());
// After rendering the current frame of the animation, we ask the engine to
// schedule another frame. The engine will call beginFrame again when it's time
// to produce the next frame.
ui.window.scheduleFrame();
}
void main() {
ui.window.onBeginFrame = beginFrame;
ui.window.scheduleFrame();
}
Related
I am trying to recreate the mod from the YouTuber TommyInnit's video "Minecraft’s Laser Eye Mod Is Hilarious", since my friends and I want to use it but couldn't find it on the internet. I have taken code from here for raycasting and also set up a keybind, but I cannot figure out how to set the block you are looking at. I have tried to get the block and set it, but I can only find how to make new blocks that don't yet exist. My code is the following, with the block code being on line 142:
package net.laser.eyes;
import org.lwjgl.glfw.GLFW;
import net.minecraft.client.options.KeyBinding;
import net.minecraft.client.util.InputUtil;
import net.minecraft.text.LiteralText;
import net.fabricmc.api.ModInitializer;
import net.fabricmc.fabric.api.client.keybinding.v1.KeyBindingHelper;
import net.fabricmc.fabric.api.event.client.ClientTickCallback;
import net.fabricmc.api.*;
import net.fabricmc.fabric.api.client.rendering.v1.*;
import net.minecraft.block.*;
import net.minecraft.client.*;
import net.minecraft.client.gui.*;
import net.minecraft.client.util.math.*;
import net.minecraft.entity.*;
import net.minecraft.entity.decoration.*;
import net.minecraft.entity.projectile.*;
import net.minecraft.text.*;
import net.minecraft.util.hit.*;
import net.minecraft.util.math.*;
import net.minecraft.world.*;
public class main implements ModInitializer {
@Override
public void onInitialize() {
KeyBinding binding1 = KeyBindingHelper.registerKeyBinding(new KeyBinding("key.laser-eyes.shoot", InputUtil.Type.KEYSYM, GLFW.GLFW_KEY_R, "key.category.laser.eyes"));
HudRenderCallback.EVENT.register(main::displayBoundingBox);
ClientTickCallback.EVENT.register(client -> {
while (binding1.wasPressed()) {
client.player.sendMessage(new LiteralText("Key 1 was pressed!"), false);
}
});
}
private static long lastCalculationTime = 0;
private static boolean lastCalculationExists = false;
private static int lastCalculationMinX = 0;
private static int lastCalculationMinY = 0;
private static int lastCalculationWidth = 0;
private static int lastCalculationHeight = 0;
private static void displayBoundingBox(MatrixStack matrixStack, float tickDelta) {
long currentTime = System.currentTimeMillis();
if(lastCalculationExists && currentTime - lastCalculationTime < 1000/45) {
drawHollowFill(matrixStack, lastCalculationMinX, lastCalculationMinY,
lastCalculationWidth, lastCalculationHeight, 2, 0xffff0000);
return;
}
lastCalculationTime = currentTime;
MinecraftClient client = MinecraftClient.getInstance();
int width = client.getWindow().getScaledWidth();
int height = client.getWindow().getScaledHeight();
Vec3d cameraDirection = client.cameraEntity.getRotationVec(tickDelta);
double fov = client.options.fov;
double angleSize = fov/height;
Vector3f verticalRotationAxis = new Vector3f(cameraDirection);
verticalRotationAxis.cross(Vector3f.POSITIVE_Y);
if(!verticalRotationAxis.normalize()) {
lastCalculationExists = false;
return;
}
Vector3f horizontalRotationAxis = new Vector3f(cameraDirection);
horizontalRotationAxis.cross(verticalRotationAxis);
horizontalRotationAxis.normalize();
verticalRotationAxis = new Vector3f(cameraDirection);
verticalRotationAxis.cross(horizontalRotationAxis);
HitResult hit = client.crosshairTarget;
if (hit.getType() == HitResult.Type.MISS) {
lastCalculationExists = false;
return;
}
int minX = width;
int maxX = 0;
int minY = height;
int maxY = 0;
for(int y = 0; y < height; y +=2) {
for(int x = 0; x < width; x+=2) {
if(minX < x && x < maxX && minY < y && y < maxY) {
continue;
}
Vec3d direction = map(
(float) angleSize,
cameraDirection,
horizontalRotationAxis,
verticalRotationAxis,
x,
y,
width,
height
);
HitResult nextHit = rayTraceInDirection(client, tickDelta, direction);//TODO make less expensive
if(nextHit == null) {
continue;
}
if(nextHit.getType() == HitResult.Type.MISS) {
continue;
}
if(nextHit.getType() != hit.getType()) {
continue;
}
if (nextHit.getType() == HitResult.Type.BLOCK) {
if(!((BlockHitResult) nextHit).getBlockPos().equals(((BlockHitResult) hit).getBlockPos())) {
continue;
}
} else if(nextHit.getType() == HitResult.Type.ENTITY) {
if(!((EntityHitResult) nextHit).getEntity().equals(((EntityHitResult) hit).getEntity())) {
continue;
}
}
if(minX > x) minX = x;
if(minY > y) minY = y;
if(maxX < x) maxX = x;
if(maxY < y) maxY = y;
}
}
lastCalculationExists = true;
lastCalculationMinX = minX;
lastCalculationMinY = minY;
lastCalculationWidth = maxX - minX;
lastCalculationHeight = maxY - minY;
drawHollowFill(matrixStack, minX, minY, maxX - minX, maxY - minY, 2, 0xffff0000);
LiteralText text = new LiteralText("Bounding " + minX + " " + minY + " " + width + " " + height + ": ");
client.player.sendMessage(text.append(getLabel(hit)), true);
//SET THE BLOCK (maybe use hit.getPos(); to find it??)
}
private static void drawHollowFill(MatrixStack matrixStack, int x, int y, int width, int height, int stroke, int color) {
matrixStack.push();
matrixStack.translate(x-stroke, y-stroke, 0);
width += stroke *2;
height += stroke *2;
DrawableHelper.fill(matrixStack, 0, 0, width, stroke, color);
DrawableHelper.fill(matrixStack, width - stroke, 0, width, height, color);
DrawableHelper.fill(matrixStack, 0, height - stroke, width, height, color);
DrawableHelper.fill(matrixStack, 0, 0, stroke, height, color);
matrixStack.pop();
}
private static Text getLabel(HitResult hit) {
if(hit == null) return new LiteralText("null");
switch (hit.getType()) {
case BLOCK:
return getLabelBlock((BlockHitResult) hit);
case ENTITY:
return getLabelEntity((EntityHitResult) hit);
case MISS:
default:
return new LiteralText("null");
}
}
private static Text getLabelEntity(EntityHitResult hit) {
return hit.getEntity().getDisplayName();
}
private static Text getLabelBlock(BlockHitResult hit) {
BlockPos blockPos = hit.getBlockPos();
BlockState blockState = MinecraftClient.getInstance().world.getBlockState(blockPos);
Block block = blockState.getBlock();
return block.getName();
}
private static Vec3d map(float anglePerPixel, Vec3d center, Vector3f horizontalRotationAxis,
Vector3f verticalRotationAxis, int x, int y, int width, int height) {
float horizontalRotation = (x - width/2f) * anglePerPixel;
float verticalRotation = (y - height/2f) * anglePerPixel;
final Vector3f temp2 = new Vector3f(center);
temp2.rotate(verticalRotationAxis.getDegreesQuaternion(verticalRotation));
temp2.rotate(horizontalRotationAxis.getDegreesQuaternion(horizontalRotation));
return new Vec3d(temp2);
}
private static HitResult rayTraceInDirection(MinecraftClient client, float tickDelta, Vec3d direction) {
Entity entity = client.getCameraEntity();
if (entity == null || client.world == null) {
return null;
}
double reachDistance = 5.0F;
HitResult target = rayTrace(entity, reachDistance, tickDelta, false, direction);
boolean tooFar = false;
double extendedReach = 6.0D;
reachDistance = extendedReach;
Vec3d cameraPos = entity.getCameraPosVec(tickDelta);
extendedReach = extendedReach * extendedReach;
if (target != null) {
extendedReach = target.getPos().squaredDistanceTo(cameraPos);
}
Vec3d vec3d3 = cameraPos.add(direction.multiply(reachDistance));
Box box = entity
.getBoundingBox()
.stretch(entity.getRotationVec(1.0F).multiply(reachDistance))
.expand(1.0D, 1.0D, 1.0D);
EntityHitResult entityHitResult = ProjectileUtil.raycast(
entity,
cameraPos,
vec3d3,
box,
(entityx) -> !entityx.isSpectator() && entityx.collides(),
extendedReach
);
if (entityHitResult == null) {
return target;
}
Entity entity2 = entityHitResult.getEntity();
Vec3d hitPos = entityHitResult.getPos();
if (cameraPos.squaredDistanceTo(hitPos) < extendedReach || target == null) {
target = entityHitResult;
if (entity2 instanceof LivingEntity || entity2 instanceof ItemFrameEntity) {
client.targetedEntity = entity2;
}
}
return target;
}
private static HitResult rayTrace(
Entity entity,
double maxDistance,
float tickDelta,
boolean includeFluids,
Vec3d direction
) {
Vec3d end = entity.getCameraPosVec(tickDelta).add(direction.multiply(maxDistance));
return entity.world.raycast(new RaycastContext(
entity.getCameraPosVec(tickDelta),
end,
RaycastContext.ShapeType.OUTLINE,
includeFluids ? RaycastContext.FluidHandling.ANY : RaycastContext.FluidHandling.NONE,
entity
));
}
}
Firstly, I strongly recommend following the standard Java naming conventions, as it will make your code more understandable for others.
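For example, class names use UpperCamelCase, so the entry-point class would conventionally look something like this (LaserEyesMod is just an illustrative name, not something from your code):
// Classes use UpperCamelCase; packages and mod IDs stay lowercase.
public class LaserEyesMod implements ModInitializer {
    @Override
    public void onInitialize() {
        // ...
    }
}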
The technical name for a block present in the world at a specific position is a "Block State", represented by the BlockState class.
You can only change the block state at a specific position on the server side. Your raycasting code is run on the client side, so you need to use the Fabric Networking API. You can see the server-side Javadoc here and the client-side Javadoc here.
Thankfully, the Fabric Wiki has a networking tutorial, so you don't have to read all of that Javadoc. The part you're interested in is "sending packets to the server and receiving packets on the server".
Here's a guide specific for your use case:
Introduction to Networking
Minecraft operates as two different components: the client and the server.
The client is responsible for jobs such as rendering and GUIs, while the server is responsible for handling world storage, entity AI, etc. (we are talking about the logical client and server here).
The physical server and physical client are the actual JAR files that are run.
The physical (dedicated) server contains only the logical server, while a physical client contains both a logical (integrated) server and a logical client.
A diagram that explains it can be found here.
So, since the logical client cannot change the state of the logical server (e.g. block states in a world), packets have to be sent from the client to the server in order for the server to respond.
The following code is only example code, and you shouldn't copy it! You should think about safety precautions like preventing cheat clients from changing every block. Probably one of the most important rules in networking: assume the client is lying.
The Fabric Networking API
Your starting points are ServerPlayNetworking and ClientPlayNetworking. They are the classes that help you send and receive packets.
Register a listener using registerGlobalReceiver, and send a packet by using send.
You first need an Identifier to distinguish your packet from other packets and make sure it is interpreted correctly. It is recommended to put an Identifier like this in a static field in your ModInitializer or a utility class.
public class MyMod implements ModInitializer {
public static final Identifier SET_BLOCK_PACKET = new Identifier("modid", "setblock");
}
(Don't forget to replace modid with your mod ID)
You usually want to pass data with your packets (e.g. block position and block to change to), and you can do so with a PacketByteBuf.
Let's Piece This all Together
So, we have an Identifier. Let's send a packet!
Client-Side
We will start by creating a PacketByteBuf and writing the correct data:
private static void displayBoundingBox(MatrixStack matrixStack, float tickDelta) {
// ...
PacketByteBuf buf = PacketByteBufs.create();
// hit.getPos() is a Vec3d; for a block hit, write the BlockPos instead
buf.writeBlockPos(((BlockHitResult) hit).getBlockPos());
// Minecraft doesn't have a way to write a Block to a packet, so we write its registry name instead
buf.writeIdentifier(new Identifier("minecraft", "someblock" /*for example, "stone"*/));
}
And now we send the packet:
// ...
ClientPlayNetworking.send(SET_BLOCK_PACKET, buf);
Server-Side
A packet with the SET_BLOCK_PACKET ID has been sent, but we also need to listen for it and receive it on the server side. We can do that by using ServerPlayNetworking.registerGlobalReceiver:
@Override
public void onInitialize() {
// ...
// This code MUST be in onInitialize
ServerPlayNetworking.registerGlobalReceiver(SET_BLOCK_PACKET, (server, player, handler, buf, sender) -> {
});
}
We are using a lambda expression here. For more info about lambdas, Google is your friend.
When receiving a packet, code inside your lambda will be executed on the network thread. This code is not allowed to modify anything related to in-game logic (i.e. the world). For that, we will use server.execute(Runnable).
You should read the buf on the network thread, though.
ServerPlayNetworking.registerGlobalReceiver(SET_BLOCK_PACKET, (server, player, handler, buf, sender) -> {
BlockPos pos = buf.readBlockPos(); // reads must be done in the same order
Block blockToSet = Registry.BLOCK.get(buf.readIdentifier()); // reading using the identifier
server.execute(() -> { // We are now on the main thread
// In a normal mod, checks will be done here to prevent the client from setting blocks outside the world etc. but this is only example code
player.getServerWorld().setBlockState(pos, blockToSet.getDefaultState()); // changing the block state
});
});
Once again, you should prevent the client from sending invalid locations.
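For instance, a minimal sanity check inside the receiver might look like this (the 64-block limit and this particular check are only an illustration of the idea, not part of the Fabric API or the original mod):
server.execute(() -> {
    // Reject positions that are suspiciously far away from the player.
    // A real mod would also check permissions, world limits, protected regions, etc.
    if (!player.getBlockPos().isWithinDistance(pos, 64.0)) {
        return;
    }
    player.getServerWorld().setBlockState(pos, blockToSet.getDefaultState());
});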
Hi, I'm new to programming, and while I was trying to do an exercise this error popped up; I have no idea what it means or how to fix it.
It's an example from the processing.js library that didn't go so well after I tried to copy it.
Mover mover;
void setup(){
size (600,600);
mover = new Mover();
}
void draw(){
mover.update();
mover.display();
mover.checkEdges();
}
class Mover {
// position, velocity, and acceleration
PVector position;
PVector velocity;
PVector acceleration;
// Mass is tied to size
float mass;
Mover(float m, float x, float y) { //<<<the error occurs here
mass = m;
position = new PVector(x, y);
velocity = new PVector(0, 0);
acceleration = new PVector(0, 0);
}
// Newton's 2nd law: F = M * A
// or A = F / M
void applyForce(PVector force) {
// Divide by mass
PVector f = PVector.div(force, mass);
// Accumulate all forces in acceleration
acceleration.add(f);
}
void update() {
// Velocity changes according to acceleration
velocity.add(acceleration);
// position changes by velocity
position.add(velocity);
// We must clear acceleration each frame
acceleration.mult(0);
}
// Draw Mover
void display() {
stroke(255);
strokeWeight(2);
fill(255, 200);
ellipse(position.x, position.y, mass*16, mass*16);
}
// Bounce off bottom of window
void checkEdges() {
if (position.y > height) {
velocity.y *= -0.9; // A little dampening when hitting the bottom
position.y = height;
}
}
}
The only problem I see in your code is that your Mover constructor takes three arguments, but you're not giving it any in this line:
mover = new Mover();
So I'd fix that error before anything else. If you're still having problems, please be much more specific about exactly how you're compiling and running this code. Are you using the Processing editor? Which version? During exactly which step do you get an error?
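To illustrate the first point, a minimal fix is to pass the three values the constructor declares (the mass and starting position below are just placeholder values, not taken from your exercise):
void setup() {
  size(600, 600);
  // The constructor expects mass, x and y, so pass all three.
  mover = new Mover(2, width/2, 30);
}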
For a side-scrolling 2D platform game I want to implement a moving camera class. The reason for using the class instead of moving the whole map is that I would have to move too many objects at once, which will cause lag. I cannot let that happen.
There's a nice algorithm for handling the camera: when the player moves further than the width of the screen, the camera moves in the player's direction until he is once again in the middle of the screen. I've been working for several days on making this algorithm work, but with no success.
// Main
public class Camera
{
protected float _zoom;
protected Matrix _transform;
protected Matrix _inverseTransform;
//The zoom scalar (1.0f = 100% zoom level)
public float Zoom
{
get { return _zoom; }
set { _zoom = value; }
}
// Camera View Matrix Property
public Matrix Transform
{
get { return _transform; }
set { _transform = value; }
}
// Inverse of the view matrix,
// can be used to get
// objects screen coordinates
// from its object coordinates
public Matrix InverseTransform
{
get { return _inverseTransform; }
}
public Vector2 Pos;
// Constructor
public Camera()
{
_zoom = 2.4f;
Pos = new Vector2(0, 0);
}
// Update
public void Update(GameTime gameTime)
{
//Clamp zoom value
_zoom = MathHelper.Clamp(_zoom, 0.0f, 10.0f);
//Create view matrix
_transform = Matrix.CreateScale(new Vector3(_zoom, _zoom, 1)) *
Matrix.CreateTranslation(Pos.X, Pos.Y, 0);
//Update inverse matrix
_inverseTransform = Matrix.Invert(_transform);
}
}
This is the camera class I made for handling the screen. Its main purpose is to resize the screen, more precisely to zoom in and out whenever I want to change my screen (title screen, playing screen, game over, and so on).
Moving the camera with keys is quite simple, like this:
if (keyState.IsKeyDown(Keys.D))
Cam.Pos.X -= 20;
if (keyState.IsKeyDown(Keys.A))
Cam.Pos.X += 20;
if (keyState.IsKeyDown(Keys.S))
Cam.Pos.Y -= 20;
if (keyState.IsKeyDown(Keys.W))
Cam.Pos.Y += 20;
And of course the drawing method which applies the camera:
spriteBatch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend, null, null, null, null, Cam.Transform);
Here comes the part where I stop. What I want to do is make something like two 2D "rooms". By room I mean the space where I usually place objects, like this: "Vector2(74, 63)". So I want to create a place where I can draw items that stick to the screen and don't move, and define screen bounds that make my algorithm work: bounds that are always on screen and that, in addition, check whether one of the borders of the screen "room" has reached certain coordinates of the map "room".
I think the reason for that is obvious: I don't want the player to move the camera outside the map when he reaches a wall, otherwise the player would already see part of the next map he will be teleported to.
The reason for drawing both maps next to each other is, again, to reduce loading time, so the player doesn't have to wait before playing the next map.
Alright, I've run into more trouble than I expected, so I'll add some extra information, starting with the player class:
// Main
public class Player
{
public Texture2D AureliusTexture;
public Vector2 position;
public Vector2 velocity;
public Vector2 PosForTheCam; // Variable that holds value for moving the camera
protected Vector2 dimensions;
protected CollisionPath attachedPath;
const float GRAVITY = 18.0f;
const float WALK_VELOCITY = 120f;
const float JUMP_VELOCITY = -425.0f;
// Constructor
public Player()
{
dimensions = new Vector2(23, 46);
position = new Vector2(50, 770);
}
public void Update(float deltaSeconds, List<CollisionPath> collisionPaths)
{
#region Input handling
KeyboardState keyState = Keyboard.GetState();
if (keyState.IsKeyDown(Keys.Left))
{
velocity.X = -WALK_VELOCITY;
}
else if (keyState.IsKeyDown(Keys.Right))
{
velocity.X = WALK_VELOCITY;
}
else
{
velocity.X = 0;
}
if (attachedPath != null && keyState.IsKeyDown(Keys.Space))
{
velocity.Y = JUMP_VELOCITY;
attachedPath = null;
}
velocity.Y += GRAVITY;
#endregion
#region Region of handling the camera based on Player
PosForTheCam.X = velocity.X;
#endregion
#region Collision checking
if (velocity.Y >= 0)
{
if (attachedPath != null)
{
position.X += velocity.X * deltaSeconds;
position.Y = attachedPath.InterpolateY(position.X) - dimensions.Y / 2;
velocity.Y = 0;
if (position.X < attachedPath.MinimumX || position.X > attachedPath.MaximumX)
{
attachedPath = null;
}
}
else
{
Vector2 footPosition = position + new Vector2(0, dimensions.Y / 2);
Vector2 expectedFootPosition = footPosition + velocity * deltaSeconds;
CollisionPath landablePath = null;
float landablePosition = float.MaxValue;
foreach (CollisionPath path in collisionPaths)
{
if (expectedFootPosition.X >= path.MinimumX && expectedFootPosition.X <= path.MaximumX)
{
float pathOldY = path.InterpolateY(footPosition.X);
float pathNewY = path.InterpolateY(expectedFootPosition.X);
if (footPosition.Y <= pathOldY && expectedFootPosition.Y >= pathNewY && pathNewY < landablePosition)
{
landablePath = path;
landablePosition = pathNewY;
}
}
}
if (landablePath != null)
{
velocity.Y = 0;
footPosition.Y = landablePosition;
attachedPath = landablePath;
position.X += velocity.X * deltaSeconds;
position.Y = footPosition.Y - dimensions.Y / 2;
}
else
{
position = position + velocity * deltaSeconds;
}
}
}
else
{
position += velocity * deltaSeconds;
attachedPath = null;
}
#endregion
}
}
I should state clearly that I asked my friend to do most of this, because I wanted to handle gravity and slopes, so we made it work similarly to Unity, and he happened to know how to do that.
And here is the Update method that handles the camera, from the main class:
MM.Update(gameTime); // Map Managher update function for map handling
Cam.Update(gameTime); // Camera update
Cam.Zoom = 2.4f; // Sets the zoom level for the title screen
// Takes the start position for camera in map and then turns off the update
// so the camera position can be changed. Else it would just keep an infinite
// loop and we couldn't change the camera.
if (StartInNewRoom)
{
Cam.Pos = MM.CameraPosition; // Applies the camera position value from the map manager class
StartInNewRoom = false;
}
I am unsure how to handle the camera; when I used your method, the result was often that the camera either moved by itself or didn't move at all.
If you don't want objects to move with the camera, like a HUD, you need a second spriteBatch.Begin() without your camera matrix, which you draw after your actual scene.
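A minimal sketch of that two-pass approach (the draw calls are placeholders):
// World pass: everything here is affected by the camera.
spriteBatch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend, null, null, null, null, Cam.Transform);
// ... draw the map, player, enemies ...
spriteBatch.End();
// HUD pass: no camera matrix, so these draws stay fixed on the screen.
spriteBatch.Begin();
// ... draw score, health bar and other "screen room" items ...
spriteBatch.End();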
To keep the camera from moving out of the map you could use some kind of collision detection. Just calculate the right border of your camera; it depends on where the origin of your camera is.
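A sketch of such a clamp, assuming this runs inside your Game class (where GraphicsDevice is available), that the camera position represents the top-left corner of the view in world coordinates, and that mapWidthInPixels / mapHeightInPixels are placeholder values you already know for the current map:
// The visible area in world units shrinks as the zoom grows.
float viewWidth = GraphicsDevice.Viewport.Width / Cam.Zoom;
float viewHeight = GraphicsDevice.Viewport.Height / Cam.Zoom;
// Keep the camera rectangle inside the map rectangle.
Cam.Pos.X = MathHelper.Clamp(Cam.Pos.X, 0, mapWidthInPixels - viewWidth);
Cam.Pos.Y = MathHelper.Clamp(Cam.Pos.Y, 0, mapHeightInPixels - viewHeight);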
Is your camera matrix working like this? The position should be negative or it will move in the wrong direction.
This is what mine looks like:
return Matrix.CreateTranslation(new Vector3(-camera.position.X, -camera.position.Y, 0)) *
Matrix.CreateRotationZ(Rotation) * Matrix.CreateScale(Zoom) *
Matrix.CreateTranslation(new Vector3(Viewport.Width * 0.5f, Viewport.Height * 0.5f, 0));
Viewport.Width/Height * 0.5 centers your camera.
You can also apply this offset after your Pos.X/Y.
To make the camera follow the player:
public void Update(Player player)
{
//Clamp zoom value
_zoom = MathHelper.Clamp(_zoom, 0.0f, 10.0f);
//Create view matrix
_transform = Matrix.CreateScale(new Vector3(_zoom, _zoom, 1)) *
Matrix.CreateTranslation(player.Pos.X, player.Pos.Y, 0);
//Update inverse matrix
_inverseTransform = Matrix.Invert(_transform);
}
As a basis for my GPS functionality I've taken the HelloMap3D example from Nutiteq (thanks Jaak) and adapted it to show my current GPS position a little differently from that example: no growing yellow circles, but a fixed blue translucent circle with a center point at my current position. It works fine except for the update: it should erase the previous position when the location changes, so that the update happens as in the example, in the onLocationChanged method.
This is the code in my Main Activity
protected void initGps(final MyLocationCircle locationCircle) {
final Projection proj = mapView.getLayers().getBaseLayer().getProjection();
locationListener = new LocationListener() {
@Override
public void onLocationChanged(Location location) {
locationCircle.setLocation(proj, location);
locationCircle.setVisible(true);
}
// Other methods...
}
}
I have adapted the MyLocationCircle class like this:
public void update() {
//Draw center with a drawable
Bitmap bitmapPosition = BitmapFactory.decodeResource(activity.getResources(), R.drawable.ic_home);
PointStyle pointStyle = PointStyle.builder().setBitmap(bitmapPosition).setColor(Color.BLUE).build();
// Create/update Point
if ( point == null ) {
point = new Point(circlePos, null, pointStyle, null);
layer.add(point);
} else { // We just have to change the Position to actual Position
point.setMapPos(circlePos);
}
point.setVisible(visible);
// Build closed circle
circleVerts.clear();
for (float tsj = 0; tsj <= 360; tsj += 360 / NR_OF_CIRCLE_VERTS) {
MapPos mapPos = new MapPos(circleScale * Math.cos(tsj * Const.DEG_TO_RAD) + circlePos.x, circleScale * Math.sin(tsj * Const.DEG_TO_RAD) + circlePos.y);
circleVerts.add(mapPos);
}
// Create/update line
if (circle == null) {
LineStyle lineStyle = LineStyle.builder().setWidth(0.05f).setColor(Color.BLUE).build();
PolygonStyle polygonStyle = PolygonStyle.builder().setColor(Color.BLUE & 0x80FFFFFF).setLineStyle(lineStyle).build();//0xE0FFFF
circle = new Polygon(circleVerts, null, polygonStyle, circle_data);
layer.add(circle);
} else {
circle.setVertexList(circleVerts);
}
circle.setVisible(visible);
}
public void setVisible(boolean visible) {
this.visible = visible;
}
public void setLocation(Projection proj, Location location) {
circlePos = proj.fromWgs84(location.getLongitude(), location.getLatitude());
projectionScale = (float) proj.getBounds().getWidth();
circleRadius = location.getAccuracy();
// Here is the most important modification
update();
}
So, each time our position changes, the onLocationChanged(Location location) method is called; it calls locationCircle.setLocation(proj, location), which in turn calls update().
The questions are: what am I doing wrong, and how can I solve it?
Thank you in advance.
You create and add a new circle with every update. You should reuse a single one and just update its vertices with setVertexList(). In particular, this line should be outside the onLocationChanged cycle, perhaps somewhere in initGps:
circle = new Polygon(circleVerts, null, polygonStyle, circle_data);
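A rough sketch of that structure, assuming initGps has access to the layer, the styles, and an initial vertex list (the names follow the code above):
// In initGps (or wherever the overlay is set up), build the polygon once:
circle = new Polygon(circleVerts, null, polygonStyle, circle_data);
layer.add(circle);
// Then update() / onLocationChanged only has to move the existing polygon:
circle.setVertexList(circleVerts);
circle.setVisible(true);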
I am looking for a function to ensure that a given box or sphere will be visible in a WebGL canvas, and that it will fit the canvas area.
I am using a perspective camera and the camera already points to the middle of the object.
I understand that this could be achieved by changing the FOV angle or by moving the camera along the view axis.
Any idea how this could be achieved with ThreeJS ?
This is how I finally implemented it:
var camera = new THREE.PerspectiveCamera(35,1,1, 100000);
var controls = new THREE.TrackballControls( me.camera , container);
[...]
/**
* point the current camera to the center
* of the graphical object (zoom factor is not affected)
*
* the camera is moved in its x,z plane so that the orientation
* is not affected either
*/
function pointCameraTo (node) {
var me = this;
// Refocus camera to the center of the new object
var COG = shapeCenterOfGravity(node);
var v = new THREE.Vector3();
v.subVectors(COG,me.controls.target);
camera.position.addVectors(camera.position,v);
// retrieve camera orientation and pass it to trackball
camera.lookAt(COG);
controls.target.set( COG.x,COG.y,COG.z );
};
/**
* Zoom to object
*/
function zoomObject (node) {
var me = this;
var bbox = boundingBox(node);
if (bbox.empty()) {
return;
}
var COG = bbox.center();
pointCameraTo(node);
var sphereSize = bbox.size().length() * 0.5;
var distToCenter = sphereSize/Math.sin( Math.PI / 180.0 * me.camera.fov * 0.5);
// move the camera backward
var target = controls.target;
var vec = new THREE.Vector3();
vec.subVectors( camera.position, target );
vec.setLength( distToCenter );
camera.position.addVectors( vec , target );
camera.updateProjectionMatrix();
render3D();
};
An example of this can be seen in https://github.com/OpenWebCAD/node-occ-geomview/blob/master/client/geom_view.js
Here is how I did it (using TrackballControls to zoom in/out, pan, etc.):
function init(){
......
......
controls = new THREE.TrackballControls(camera);
controls.rotateSpeed = 1.0;
controls.zoomSpeed = 1.2;
controls.panSpeed = 0.8;
controls.noZoom = false;
controls.noPan = false;
controls.staticMoving = true;
controls.dynamicDampingFactor = 0.3;
controls.keys = [65, 83, 68];
controls.addEventListener('change', render);
.....
.....
render(scene, camera);
animate();
}
function render()
{
renderer.render(scene, camera);
//stats.update();
}
function animate() {
requestAnimationFrame(animate);
controls.update();
}