There is a problem that multiple users of my model have noticed: when you right-click the model (here), the movements are hypersensitive. Orbit and zoom are fine and steady, but panning more often than not results in the model rapidly shooting off into the distance. I've been playing with the camera controls to no avail, and I don't want to simply remove the pan option for the client.
Also, is there any way to transition between cameras without a fade, just a movement of the camera?
Also, Verold isn't working in Internet Explorer 11... any news?
Thanks
Solved: the problem was the focal point (the white-lined sphere). It had accidentally been set far off into the distance (easily done without noticing, and there is no undo). I just brought it back to the object.
I am trying to replicate a view from a phone (using DeviceOrientationControls) to a desktop (using TrackballControls). I am passing the view state (camera position & direction) through an intermediary server, and have that part mostly working.
I'm having trouble setting the camera rotation on the desktop. The cameras are synced to look at the same point, but the view on the desktop (receiving the view state from the phone) rotates around the view axis.
I definitely don't fully understand quaternions or rotation order. I've tried applying those, but clearly I'm out of my element. I guess I'm just looking for some hints on how to sync the camera rotation on the desktop.
It looks like I had a (TrackballControls) controls.update() call in my animate() loop that was blowing away the rotation I was setting. The camera position and direction are not changed by it, but the rotation (the "roll" of the camera) was.
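In other words, the trick is to re-apply the received orientation after controls.update() so it doesn't get clobbered. Roughly like this (a sketch only; camera, controls, renderer, and the incoming quaternion are whatever your existing setup provides):

    // Sketch: re-apply the phone's orientation AFTER controls.update(),
    // otherwise TrackballControls overwrites the camera rotation every frame.
    import * as THREE from 'three';

    // assumed to be set up elsewhere (scene, camera, renderer, TrackballControls)
    declare const camera: THREE.PerspectiveCamera;
    declare const controls: { update(): void };
    declare const renderer: THREE.WebGLRenderer;
    declare const scene: THREE.Scene;

    // updated by whatever handles the view state arriving from the phone
    let receivedQuaternion: THREE.Quaternion | null = null;

    function animate(): void {
      requestAnimationFrame(animate);
      controls.update();                            // TrackballControls writes the camera rotation here
      if (receivedQuaternion !== null) {
        camera.quaternion.copy(receivedQuaternion); // re-apply the synced orientation last, so it sticks
      }
      renderer.render(scene, camera);
    }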
In TrackballControls, it would be nice to have a setting for programmatically updating the camera's rotation that wouldn't get squashed by a call to rotateCamera(). I'll have to think about that, because it doesn't seem like it would be easy to implement.
I may be making this harder than it really is, but I am also pretty new to developing games. Currently I am making a practice scene to get used to the Unity engine again, as I have not had time to use it since last summer. My issue is that I cannot figure out how to lift the camera in game mode. Notice in my photo below how much of the "underground" is showing. I want to raise the camera so that it stays at or above a specific Y-axis value, so that less of the ground is visible and more of the background is. If I am overcomplicating this, please also let me know. Thank you.
If the main camera is static, just lift the camera in the Scene view; you will see the changes in the Game view.
If the camera moves with respect to the player, you have to use a script: attach it to the camera, get a reference to the player's transform in the script, and change the camera's position according to the player's position, adding an offset to the camera's Y component.
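The Unity API aside, the logic itself is small: copy the player's position each frame, add an offset to Y, and clamp the result to a minimum height. A rough sketch of just that logic (written as plain TypeScript-style code rather than a Unity C# script; the values and names are made up):

    // Sketch of the follow-with-minimum-height idea (plain math, not the Unity API).
    interface Vec3 { x: number; y: number; z: number; }

    const yOffset = 3;    // how far above the player the camera sits (made-up value)
    const minCameraY = 2; // never let the camera drop below this height (made-up value)

    function followPlayer(player: Vec3, camera: Vec3): void {
      camera.x = player.x;
      camera.z = player.z;
      camera.y = Math.max(player.y + yOffset, minCameraY); // lift the camera and clamp its height
    }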
So, I have this code. It's a small 3D scene with a ground, a red box, a custom loaded building and a rotating "sun". I'm delegating camera navigation to the OrbitControls script, as it best fits the way I want the camera to behave. However, there is a weird little problem: after I zoom in on a 3D object in this scene, rotate a little, then zoom out to "leave" the object, the zoom-out takes a billion scrolls. It's a weird behavior and I'm sorry if I'm not being clear enough; once I'm in, I have to scroll practically forever, and every frame the camera seems to move out of the object very slowly, like the camera state is somehow screwed up.
I'm sorry if this very question has already been asked; I looked for this issue and tried stuff from other topics that seemed the same, but it didn't work.
#Edit
Wow, something even weirder. I tested zooming in on this example indefinitely, and after a while the zoom-in steps became VERY small (just like in my code). Am I misunderstanding something? It looks as if the number of zoom-ins somehow blocked the rendering or something.
WestLangley's tip actually solved my problem. Setting minDistance prevented the camera from zooming in indefinitely, despite the actual rendering only showing a small step into the scene each time.
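For anyone else hitting this, the relevant settings look roughly like this (assuming controls is your THREE.OrbitControls instance; the numbers are just examples):

    // Clamp how far the camera can dolly in/out so repeated zoom-ins
    // can't shrink the zoom step towards zero.
    controls.minDistance = 5;    // closest allowed distance to the target
    controls.maxDistance = 500;  // farthest allowed distance from the target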
I am having strange artifacts on a tiled map while scrolling with the camera clamped to the player (who is a Box2D body).
Before running into this issue I used the linear filter for the tiled map, which prevents those strange artifacts but results in texture bleeding (I loaded the tiled map straight from a .tmx file without padding the tiles).
Now I am using the nearest filter instead, which gets rid of the bleeding, but when scrolling the map (by walking the character with the camera clamped to him) it seems like a lot of pixels are flickering around. The flickering can get better or worse depending on the camera's zoom value.
However, when I use the "OrthoCamController" class from the libgdx utilities, which lets you scroll the map by panning with the mouse/finger, I don't get these artifacts at all.
I assume the flickering might be caused by bad camera position values derived from the Box2D body's position.
One more thing I should add: the game instance runs in 1280x720 display mode while my game camera renders only 800x480. When I change the game camera's render resolution to 1280x720 I don't get those artifacts, but then the tiles are way too tiny.
Has anyone experienced this issue or knows how to fix that? :)
I had a similar problem with this, and found it was due to sub-pixel decimal values in the camera position.
I think what may be happening is some sort of rounding with certain tile columns/rows in the tilemap renderer.
I fixed this by rounding to a set accuracy, like so:
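// snap camera.position.x to the nearest 1/scalePosition so the camera never sits at an awkward sub-pixel offset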
camera.position.x = Math.round(player.entity.getX() * scalePosition) / scalePosition;
Experiment with various values, but I got it working by using the tile size as the scalePosition value.
About tilesets, I posted a solution here: Getting gaps between tiled textures with libgdx
I've been using that method with Tiled itself. You will have to adjust "margin" and "spacing" when importing tilesets in Tiled to get the effect working.
It's 100% working for me :)
For the past few months I've been looking into developing a Kinect based multitouch interface for a variety of software music synthesizers.
The overall strategy I've come up with is to create objects, either programmatically or (if possible) algorithmically, to represent the various controls of the soft synth (see the sketch after this list). These should have:
X position
Y position
Height
Width
MIDI output channel
MIDI data scaler (convert x-y coords to midi values)
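To make that concrete, here is roughly what I picture such a control object looking like (a sketch only; the names and the fader example are illustrative and not tied to any particular MIDI library):

    // Sketch of a control object with bounds, a MIDI channel, and a scaler
    // that maps a position inside the control to a 0-127 MIDI value.
    interface SynthControl {
      x: number;       // left edge, screen coords
      y: number;       // top edge, screen coords
      width: number;
      height: number;
      midiChannel: number;
      // convert a point inside the control to a MIDI value (0-127)
      scale(px: number, py: number): number;
    }

    // Example: a vertical fader that maps the y coordinate to 0-127.
    function makeFader(x: number, y: number, width: number, height: number, midiChannel: number): SynthControl {
      return {
        x, y, width, height, midiChannel,
        scale(px, py) {
          const t = 1 - (py - y) / height;                        // 0 at the bottom, 1 at the top
          return Math.max(0, Math.min(127, Math.round(t * 127))); // clamp to the valid MIDI range
        },
      };
    }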
Two strategies I've considered for algorithmic creation are an XML description and somehow pulling the controls right off the screen (i.e. given a running program, find the x/y coords of all its controls). I have no idea how to go about that second one, which is why I express it in such specific technical language ;). I could do some intermediate solution, like using mouse clicks on the corners of controls to generate an XML file. Another thing I could do, which I've seen frequently in Flash apps, is to put the screen size into a variable and use math to build all interface objects in terms of the screen size. Note that it isn't strictly necessary to make the objects the same size as the onscreen controls, or to represent all onscreen objects (some are just indicators, not interactive controls).
Other considerations:
Given (for now) two sets of x/y coords as input (left and right hands), what is my best option for using them? My first instinct was to create some kind of focus test, where if the x/y coords fall within an interface object's bounds that object becomes active, and it becomes inactive again if they fall outside some other bounds for some period of time. The cheap solution I found was to use the left hand as the pointer/selector and the right as a controller, but it seems like I can do more. I have a few gesture solutions (hidden Markov models) I could play around with. Not that they'd be easy to get working, exactly, but it's something I could see myself doing given sufficient incentive.
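Roughly, the focus test I have in mind would be something like this (a sketch; the bounds, timeout, and hand-tracking input are all placeholders):

    // Sketch of the focus test: a control becomes active when the hand's x/y falls
    // inside its bounds, and only loses focus after the hand has stayed outside
    // those bounds for a timeout.
    interface Bounds { x: number; y: number; width: number; height: number; }

    const LOSE_FOCUS_AFTER_MS = 500; // placeholder dwell time

    class FocusTracker {
      private active = false;
      private outsideSince: number | null = null;

      constructor(private bounds: Bounds) {}

      update(handX: number, handY: number, nowMs: number): boolean {
        const inside =
          handX >= this.bounds.x && handX <= this.bounds.x + this.bounds.width &&
          handY >= this.bounds.y && handY <= this.bounds.y + this.bounds.height;

        if (inside) {
          this.active = true;         // gain focus immediately
          this.outsideSince = null;
        } else if (this.active) {
          if (this.outsideSince === null) this.outsideSince = nowMs;
          if (nowMs - this.outsideSince > LOSE_FOCUS_AFTER_MS) this.active = false; // lose focus after the dwell
        }
        return this.active;
      }
    }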
So, to summarize, the problem is
represent the interface (necessary because the default interface always expects mouse input)
select a control
manipulate it using two sets of x/y coords (for rotary/continuous controllers) or, in the case of switches, preferably use a gesture to flip it without giving/taking focus.
Any comments, especially from people who have worked/are working in multitouch io/NUI, are greatly appreciated. Links to existing projects and/or some good reading material (books, sites, etc) would be a big help.
Whoa, lots of stuff here. I worked on a lot of NUI stuff during my time at Microsoft, so let's see what we can do...
But first, I need to get this pet peeve out of the way: you say "Kinect based multitouch". That's just wrong. Kinect inherently has nothing to do with touch (which is why you have the "select a control" challenge). The types of UI consideration needed for touch, body tracking, and mouse input are totally different. For example, in touch UI you have to be very careful about resizing things based on screen size/resolution/DPI... regardless of the screen, fingers are always the same physical size and people have the same degree of physical accuracy, so you want your buttons and similar controls to always be roughly the same physical size. Research has found 3/4 of an inch to be the sweet spot for touchscreen buttons. This isn't so much of a concern with Kinect, though, since you aren't directly touching anything - accuracy is dictated not by finger size but by sensor accuracy and the user's ability to precisely control a finicky and laggy virtual cursor.
If you spend time playing with Kinect games, it quickly becomes clear that there are 4 interaction paradigms.
1) Pose-based commands. User strikes and holds a pose to invoke some application-wide action or command (usually bringing up a menu)
2) Hover buttons. User moves a virtual cursor over a button and holds still for a certain period of time to select the button
3) Swipe-based navigation and selection. User waves their hand in one direction to scroll a list and in another direction to select from the list
4) Voice commands. User just speaks a command.
There are other mouse-like ideas that have been tried by hobbyists (I haven't seen these in an actual game) but frankly they suck: 1) using one hand for the cursor and the other hand to "click" where the cursor is, or 2) using the z-coordinate of the hand to determine whether to "click".
It's not clear to me whether you are asking about how to make some existing mouse widgets work with Kinect. If so, there are some projects on the web that will show you how to control the mouse with Kinect input but that's lame. It may sound super cool but you're really not at all taking advantage of what the device does best.
If I were building a music synthesizer, I would focus on approach #3 - swiping. Something like Dance Central. On the left side of the screen, show a list of your MIDI controllers with some small visual indication of their status. Let the user swipe their left hand to scroll through and select a controller from this list. On the right side of the screen, show how you are tracking the user's right hand within some plane in front of their body. Now you're letting them use both hands at the same time, giving immediate visual feedback of how each hand is being interpreted, and not requiring them to be super precise.
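If it helps, the swipe part can start out really simple: watch the left hand's horizontal velocity and fire a scroll step when it crosses a threshold, with a short cooldown so one physical swipe doesn't scroll several items. A rough sketch (the thresholds are made up):

    // Rough swipe detector: fires when the hand's x velocity exceeds a threshold,
    // then ignores input for a cooldown so one swipe = one scroll step.
    const SWIPE_VELOCITY = 1.2;   // metres per second, made-up threshold
    const COOLDOWN_MS = 400;      // made-up cooldown

    class SwipeDetector {
      private lastX: number | null = null;
      private lastT: number | null = null;
      private readyAt = 0;

      // returns -1 (swipe left), +1 (swipe right), or 0 (no swipe this frame)
      update(handX: number, nowMs: number): number {
        let result = 0;
        if (this.lastX !== null && this.lastT !== null && nowMs >= this.readyAt) {
          const dt = (nowMs - this.lastT) / 1000;
          if (dt > 0) {
            const velocity = (handX - this.lastX) / dt;
            if (Math.abs(velocity) > SWIPE_VELOCITY) {
              result = velocity > 0 ? 1 : -1;
              this.readyAt = nowMs + COOLDOWN_MS; // start the cooldown
            }
          }
        }
        this.lastX = handX;
        this.lastT = nowMs;
        return result;
      }
    }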
PS... I'd also like to give a shout out to Josh Blake's upcoming NUI book. It's good stuff. If you really want to master this area, go order a copy :) http://www.manning.com/blake/