Colliding an object - camera

I'm taking my first steps in Unity, as a follow-up to this question:
http://answers.unity3d.com/questions/56697/isometric-game-camera-limits
I now realize that I don't know how to make a collider actually collide. I have a GameObject that I move around instead of the camera, and the camera is a child of that object. It has a box collider, and there are four other box colliders around the level for it to collide against... and it's not working, of course, because I was changing the position variable by hand. What do I do so this actually collides? Use a Rigidbody and apply forces to it? Is there a way to put a maxVelocity on it? I can't see one, and besides, a Rigidbody seems like overkill for what I'm trying to do. Otherwise I guess I'd just set the mass to 1 and tune drag and force, but I'd much rather work with a maxSpeed, because drag will also affect the acceleration rate.

I don't know if you have missed any steps, but I can tell you what I've done, if it helps. Create a game object, click on it in the Hierarchy, then go to the top menu and choose Component -> Physics -> Box Collider. After you add the box collider you may have to adjust its size as well. In addition, make sure the character you are walking around with also has a collider.

You can take a programmatic approach. Do I understand correctly that you want to drag the camera around with the mouse or move it with keys? You can check the camera position by hand and apply bounds that way. It is quite easy to implement if your camera is locked into a single box.
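If it helps, here is a minimal sketch of that clamping idea in plain C++ rather than a Unity C# script, just to show the math; the bounds and maxSpeed values are placeholders. In Unity, the same logic would sit in the script that moves your camera's parent object.

#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

// Keep the camera's position inside a fixed rectangle.
Vec2 clampToBounds(Vec2 p, Vec2 boundsMin, Vec2 boundsMax)
{
    p.x = std::clamp(p.x, boundsMin.x, boundsMax.x);
    p.y = std::clamp(p.y, boundsMin.y, boundsMax.y);
    return p;
}

// The "maxSpeed instead of drag" idea from the question: cap the length of
// the velocity vector after applying input, before integrating the position.
Vec2 clampSpeed(Vec2 v, float maxSpeed)
{
    const float len = std::sqrt(v.x * v.x + v.y * v.y);
    if (len > maxSpeed && len > 0.0f)
    {
        v.x *= maxSpeed / len;
        v.y *= maxSpeed / len;
    }
    return v;
}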

How can I modify the Lyra camera to make a Top Down game in UE5?

I'm trying to make a Battlerite-like game. Lyra comes with CM_ArenaFramingCamera, which produces a fixed camera that doesn't track the player. I don't really understand how this is achieved, and I'm not sure how I could modify it to follow the player. Where would something like that happen, and how would it play into the Ability system?
I've also tried looking at the CM_ThirdPerson camera to modify it to function as a top-down camera. I understand how to change the curves to position the camera, but there's a lot of weird behavior associated with this and I'm unclear on how to fix the rotation.
I'd appreciate input from anybody who has done a deep dive on using the Lyra system to modify and create custom cameras!
To get the same effect as the SpringArmComponent, what I basically did was create a new class inheriting from ULyraCameraMode and copy-paste all the methods from SpringArmComponent. I replaced TickComponent with the method ULyraCameraMode::UpdateView. The most important modification you have to do is inside the method UpdateDesiredArmLocation(): there, at the very end of all the calculations, just set:
View.Location = ResultLoc;
View.Rotation = DesiredRot;
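For reference, the overall shape of such a camera mode ends up roughly like the sketch below. The class name, the pitch and distance values are just placeholders, and the ULyraCameraMode names (UpdateView, GetTargetActor, the View fields, FieldOfView) are from the Lyra version I worked with, so they may differ in yours:

// Hypothetical top-down camera mode, sketched against ULyraCameraMode.
#include "Camera/LyraCameraMode.h"
#include "TopDownCameraMode.generated.h"

UCLASS()
class UTopDownCameraMode : public ULyraCameraMode
{
    GENERATED_BODY()

protected:
    virtual void UpdateView(float DeltaTime) override
    {
        const AActor* TargetActor = GetTargetActor();
        if (!TargetActor)
        {
            return;
        }

        // Fixed top-down framing: look down at the pawn from a fixed angle
        // and distance, and never inherit the pawn's yaw.
        const FVector  PivotLocation = TargetActor->GetActorLocation();
        const FRotator DesiredRot(-60.0f, 0.0f, 0.0f);                 // pitch, yaw, roll
        const FVector  ResultLoc = PivotLocation - DesiredRot.Vector() * 1500.0f;

        View.Location        = ResultLoc;
        View.Rotation        = DesiredRot;
        View.ControlRotation = View.Rotation;
        View.FieldOfView     = FieldOfView;
    }
};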

Where in Blender source code is code that draws transform rotate/resize black arrow visual indicators?

I've been trying to find the piece of code that draws, or initiates drawing of, the double black arrow visual indicators that show up when transform rotate is executed by pressing the R key (or resize with the S key), visible here:
I've been stepping through the code of the Rotate operator, various drawing functions, etc., with no success. I suppose I do not have a good enough picture of the code structure.
I would appreciate it very much if someone could point me in the right direction.
Does someone know at least the right terminology to look for?
I'm using Blender 2.76 but I suppose insight into any version would be helpful.
(What I'm trying to do is to locate the point in the code where the decision is made whether to draw the indicator or not. I explained the "problem" in this question. The goal is to get it to show always.)
I have finally found the place, not by stepping through the code but by browsing it, lol!
The function that draws the indicators is drawHelpline(), and the check for the region being 'WINDOW' is done in helpline_poll(), both in the transform.c file.
The actual decision is made in wm_paintcursor_draw() from the wm_draw.c file, which calls helpline_poll() indirectly via pc->poll(C).
wm_paintcursor_draw() is called by wm_method_draw_triple(), which in turn is called from wm_draw_update(), which is called from WM_main().
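For reference, the poll itself is just a region-type check. From memory of the 2.7x sources it is roughly the following (paraphrased, so the exact body may differ):

static int helpline_poll(bContext *C)
{
    ARegion *ar = CTX_wm_region(C);

    /* only draw over the main 3D view region, never over headers,
     * the ToolShelf, the properties panel, etc. */
    if (ar && ar->regiontype == RGN_TYPE_WINDOW)
        return 1;
    return 0;
}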
That answers my question.
However, that does not solve my actual problem, because the active subwindow in these functions is the region from which the operator was executed - in my case the ToolShelf! That is because cursor_warp(), which I use to move the mouse in my operator, changes only the mouse pointer position and does not update anything else (i.e. it does not update the active subwindow).
So, if I force helpline_poll() to return 1, it will draw the indicator only over the ToolShelf.
One solution would be to hack WM_cursor_warp() from wm_window.c to set win->screen->subwinactive to the correct window id, but that is really an ugly hack and not directly related to the question I asked here.
The better solution is to use a modal timer operator to let Blender update the active subwindow, as explained here.

General considerations for NUI/touch interface

For the past few months I've been looking into developing a Kinect based multitouch interface for a variety of software music synthesizers.
The overall strategy I've come up with is to create objects, either programmatically or (if possible) algorithmically, to represent the various controls of the soft synth. These should have:
X position
Y position
Height
Width
MIDI output channel
MIDI data scaler (convert x-y coords to midi values)
Two strategies I've considered for algorithmic creation are an XML description and somehow pulling stuff right off the screen (i.e., given a running program, find the x/y coords of all controls). I have no idea how to go about that second one, which is why I express it in such specific technical language ;). I could do some intermediate solution, like using mouse clicks on the corners of controls to generate an XML file. Another thing I could do, which I've seen frequently in Flash apps, is to put the screen size into a variable and use math to build all interface objects in terms of screen size. Note that it isn't strictly necessary to make the objects the same size as the onscreen controls, or to represent all onscreen objects (some are just indicators, not interactive controls).
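To make that concrete, each control object could be as simple as the following C++ sketch (all the names and the vertical-only scaling are just illustrative, whether the record is filled from XML or scraped off the screen):

// Hypothetical control record; field names invented for illustration.
struct SynthControl {
    float x, y;             // top-left corner, in screen or normalized coordinates
    float width, height;
    int   midiChannel;      // MIDI output channel
    int   midiController;   // CC number to emit

    bool contains(float px, float py) const {
        return px >= x && px <= x + width && py >= y && py <= y + height;
    }

    // MIDI data scaler: map a point inside the control to 0-127
    // (vertical axis only here; a rotary control would map an angle instead).
    int toMidi(float px, float py) const {
        (void)px;
        float t = (py - y) / height;
        t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
        return static_cast<int>(t * 127.0f + 0.5f);
    }
};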
Other considerations:
Given (for now) two sets of x/y coords as input (left and right hands), what is my best option for using them? My first instinct is/was to create some kind of focus test, where if the x/y coords fall within an interface object's bounds that object becomes active, and then becomes inactive again if they fall outside some other, smaller bounds for some period of time. The cheap solution I found was to use the left hand as the pointer/selector and the right as a controller, but it seems like I can do more. I have a few gesture solutions (hidden Markov models) I could screw around with. Not that they'd be easy to get to work, exactly, but it's something I could see myself doing given sufficient incentive.
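As a sketch of that focus test (C++ again, reusing the SynthControl record from above; the margin and dwell time are arbitrary, and I've used a slightly larger release rectangle rather than a smaller one, which is the usual hysteresis arrangement):

#include <vector>

// Hypothetical focus tracker: a control grabs focus when the hand enters its
// bounds, and only loses it after the hand has stayed outside an enlarged
// "release" rectangle for a while.
struct FocusTracker {
    int   activeIndex    = -1;     // index into the controls vector, -1 = none
    float secondsOutside = 0.0f;

    void update(float hx, float hy, const std::vector<SynthControl>& controls, float dt) {
        const float releaseMargin = 0.05f;  // extra slack around the active control
        const float releaseDelay  = 0.5f;   // seconds outside before dropping focus

        if (activeIndex >= 0) {
            const SynthControl& c = controls[activeIndex];
            const bool inside =
                hx >= c.x - releaseMargin && hx <= c.x + c.width  + releaseMargin &&
                hy >= c.y - releaseMargin && hy <= c.y + c.height + releaseMargin;
            secondsOutside = inside ? 0.0f : secondsOutside + dt;
            if (secondsOutside > releaseDelay)
                activeIndex = -1;
        }
        if (activeIndex < 0) {
            for (int i = 0; i < static_cast<int>(controls.size()); ++i) {
                if (controls[i].contains(hx, hy)) {
                    activeIndex    = i;
                    secondsOutside = 0.0f;
                    break;
                }
            }
        }
    }
};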
So, to summarize, the problem is:
represent the interface (necessary because the default interface always expects mouse input)
select a control
manipulate it using two sets of x/y coords (rotary/continuous controller) or, in the case of switches, preferably use a gesture to switch it without giving/taking focus.
Any comments, especially from people who have worked/are working in multitouch io/NUI, are greatly appreciated. Links to existing projects and/or some good reading material (books, sites, etc) would be a big help.
Whoa, lots of stuff here. I worked on lots of NUI stuff during my time at Microsoft, so let's see what we can do...
But first, I need to get this pet peeve out of the way: you say "Kinect based multitouch". That's just wrong. Kinect inherently has nothing to do with touch (which is why you have the "select a control" challenge). The types of UI consideration needed for touch, body tracking, and mouse are totally different. For example, in touch UI you have to be very careful about resizing things based on screen size/resolution/DPI... regardless of the screen, fingers are always the same physical size and people have the same degree of physical accuracy, so you want your buttons and similar controls to always be roughly the same physical size. Research has found 3/4 of an inch to be the sweet spot for touchscreen buttons. This isn't so much of a concern with Kinect, though, since you aren't directly touching anything - accuracy is dictated not by finger size but by sensor accuracy and the user's ability to precisely control a finicky and laggy virtual cursor.
If you spend time playing with Kinect games, it quickly becomes clear that there are 4 interaction paradigms.
1) Pose-based commands. User strikes and holds a pose to invoke some application-wide action or command (usually bringing up a menu)
2) Hover buttons. User moves a virtual cursor over a button and holds still for a certain period of time to select the button
3) Swipe-based navigation and selection. User waves their hands in one direction to scroll a list and in another direction to select from the list
4) Voice commands. User just speaks a command.
There are other mouse-like ideas that have been tried by hobbyists (I haven't seen these in an actual game) but frankly they suck: 1) using one hand for the cursor and the other hand to "click" where the cursor is, or 2) using the z-coordinate of the hand to determine whether to "click"
It's not clear to me whether you are asking about how to make some existing mouse widgets work with Kinect. If so, there are some projects on the web that will show you how to control the mouse with Kinect input but that's lame. It may sound super cool but you're really not at all taking advantage of what the device does best.
If I were building a music synthesizer, I would focus on approach #3 - swiping. Something like Dance Central. On the left side of the screen, show a list of your MIDI controllers with some small visual indication of their status. Let the user swipe their left hand to scroll through and select a controller from this list. On the right side of the screen, show how you are tracking the user's right hand within some plane in front of their body. Now you're letting them use both hands at the same time, giving immediate visual feedback of how each hand is being interpreted, and not requiring them to be super precise.
ps... I'd also like to give a shout out to Josh Blake's upcoming NUI book. It's good stuff. If you really want to master this area, go order a copy :) http://www.manning.com/blake/

How to code a random movement in limited area

I have a limited area (the screen) populated with a few moving objects (3-20 of them, so it's not like 10,000 :). Those objects should move at a constant speed and in random directions. But there are a few limitations:
objects shouldn't exit the area - so if one is close to the edge, it should move away from it
objects shouldn't bump into each other - so when one is close to another, it should move away (but not get too close to a different one).
In the image below I have marked the allowed moves in this situation - for example, object D shouldn't move straight up, as that would bring it to the "wall".
What I would like to have is a way to move them (one by one). Is there any simple way to achieve this, without too many calculations?
The density of objects in the area would be rather low.
There are a number of ways you might programmatically enforce your desired behavior, given that you have such a small number of objects. However, I'm going to suggest something slightly different.
What if you ran the whole thing as a physics simulation? For instance, you could set up a Box2D world with no gravity, no friction, and perfectly elastic collisions. You could model your enclosed region and populate it with objects that are proportionally larger than their on-screen counterparts so that the on-screen versions never get too close to each other (because the underlying objects in the physics simulation will collide and change direction before that can happen), and assign each object a random initial position and velocity.
Then all you have to do is step the physics simulation, and map its current state into your UI. All the tricky stuff is handled for you, and the result will probably be more believable/realistic than what you would get by trying to come up with your own movement algorithm (or if you wanted it to appear more random and less believable, you could also just periodically apply a random impulse to a random object to keep things changing unpredictably).
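If you went that route, the setup is pretty small. Here is a minimal sketch against the Box2D 2.4.x C++ API (the area size, object count, radii and time step are all placeholders, and mapping positions back to your UI is left out):

#include <box2d/box2d.h>
#include <cstdlib>

int main()
{
    // No gravity: objects simply keep drifting in the directions we give them.
    b2World world(b2Vec2(0.0f, 0.0f));

    // Static walls enclosing a 10 x 10 area, built from four thin boxes.
    b2BodyDef wallDef;
    b2Body* walls = world.CreateBody(&wallDef);
    b2PolygonShape wall;
    wall.SetAsBox(5.0f, 0.1f, b2Vec2(5.0f,  0.0f), 0.0f); walls->CreateFixture(&wall, 0.0f); // bottom
    wall.SetAsBox(5.0f, 0.1f, b2Vec2(5.0f, 10.0f), 0.0f); walls->CreateFixture(&wall, 0.0f); // top
    wall.SetAsBox(0.1f, 5.0f, b2Vec2(0.0f,  5.0f), 0.0f); walls->CreateFixture(&wall, 0.0f); // left
    wall.SetAsBox(0.1f, 5.0f, b2Vec2(10.0f, 5.0f), 0.0f); walls->CreateFixture(&wall, 0.0f); // right

    // A few circles, slightly oversized relative to what you draw, with
    // frictionless, perfectly elastic fixtures and random initial velocities.
    for (int i = 0; i < 5; ++i)
    {
        b2BodyDef bd;
        bd.type = b2_dynamicBody;
        bd.position.Set(2.0f + i * 1.5f, 5.0f);
        b2Body* body = world.CreateBody(&bd);

        b2CircleShape circle;
        circle.m_radius = 0.6f;

        b2FixtureDef fd;
        fd.shape       = &circle;
        fd.density     = 1.0f;
        fd.friction    = 0.0f;
        fd.restitution = 1.0f;   // perfectly elastic
        body->CreateFixture(&fd);

        body->SetLinearVelocity(b2Vec2(std::rand() % 5 - 2.0f, std::rand() % 5 - 2.0f));
    }

    // Your game loop: step the simulation, then read body->GetPosition()
    // for each object and map it to screen coordinates for drawing.
    for (int step = 0; step < 600; ++step)
        world.Step(1.0f / 60.0f, 8, 3);
}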
You can use the hitTest:withEvent: method of UIView:
UIView *touchedView = [self.superview hitTest:currentOrigin withEvent:nil];
To this method you pass the current origin of the ball, and for the second argument you can pass nil.
The method returns the view that the ball has hit.
If there is a hit view, you just change the direction of the ball.
For the border, check the frame of the ball against the boundary: if the ball goes outside it, just change the direction of the ball.
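The border part is really just reflecting one velocity component; in generic terms (not UIKit code, names invented):

struct Ball { float x, y, vx, vy, radius; };

// Flip the relevant velocity component whenever the ball would leave the area.
void bounceOffBorders(Ball& b, float areaWidth, float areaHeight)
{
    if (b.x - b.radius < 0.0f)       { b.x = b.radius;               b.vx = -b.vx; }
    if (b.x + b.radius > areaWidth)  { b.x = areaWidth  - b.radius;  b.vx = -b.vx; }
    if (b.y - b.radius < 0.0f)       { b.y = b.radius;               b.vy = -b.vy; }
    if (b.y + b.radius > areaHeight) { b.y = areaHeight - b.radius;  b.vy = -b.vy; }
}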

I want to animate the movement of a foreign OS X app's window

Background: I recently got two monitors and want a way to move the focused window to the other screen and vice versa. I've achieved this by using the Accessibility API. (Specifically, I get an AXUIElementRef that holds the AXUIElement associated with the focused window, then I set the NSAccessibilityPositionAttribute value to move the window.)
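For reference, the position write itself is just the plain C accessibility API - roughly the following, where focusedWindow stands for the AXUIElementRef I already have and error handling is omitted:

#include <ApplicationServices/ApplicationServices.h>

// Move an accessibility-API window element to newOrigin, in screen coordinates.
void moveAXWindow(AXUIElementRef focusedWindow, CGPoint newOrigin)
{
    AXValueRef positionValue = AXValueCreate(kAXValueCGPointType, &newOrigin);
    AXUIElementSetAttributeValue(focusedWindow, kAXPositionAttribute, positionValue);
    CFRelease(positionValue);
}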
I have this working almost exactly the way I want it to, except I want to animate the movement of windows. I thought that if I could get the NSWindow somehow, I could get its layer and use CoreAnimation to animate the window movement.
Unfortunately, I found out that this isn't possible. (Correct me if I'm wrong, though -- if there's a way to do it this way, it'd be great!) So I'm asking you all for help. How should I go about animating the movement of the focused window, if I have access to the AXUIElementRef?
-R
--EDIT
I was able to get a crude animation going by creating a while loop and moving the position of the window by a small amount each iteration. However, the results are pretty sub-par. As you can guess, it takes a lot of unnecessary processing power, and it is still very choppy. There must be a better way.
The best way I can imagine would be to perform some hacky property comparison between the AXUIElement info values for the window and the info returned from the CGWindow API. Once you're able to ascertain which windows in the CGWindow API match which AXUIElementRefs, you could grab bitmaps of the current window contents, overlay the screen with your own custom animated drawing of the faux windows, and then, as you drop the overlay, set the real AXUIElementRefs to the desired end-of-animation positions.
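The CGWindow side of that matching could start from something like the following (just the enumeration; correlating the entries with an AXUIElementRef, e.g. by owner PID plus frame, is the hacky part I mean):

#include <ApplicationServices/ApplicationServices.h>
#include <cstdio>

// Dump on-screen windows with their owning PID and bounds, which is the
// information you would try to match against the AXUIElement's attributes.
void dumpWindows()
{
    CFArrayRef windows = CGWindowListCopyWindowInfo(kCGWindowListOptionOnScreenOnly,
                                                    kCGNullWindowID);
    for (CFIndex i = 0; i < CFArrayGetCount(windows); ++i)
    {
        CFDictionaryRef info = (CFDictionaryRef)CFArrayGetValueAtIndex(windows, i);

        int pid = 0;
        CFNumberGetValue((CFNumberRef)CFDictionaryGetValue(info, kCGWindowOwnerPID),
                         kCFNumberIntType, &pid);

        CGRect bounds = CGRectZero;
        CGRectMakeWithDictionaryRepresentation(
            (CFDictionaryRef)CFDictionaryGetValue(info, kCGWindowBounds), &bounds);

        std::printf("pid %d: origin (%.0f, %.0f) size %.0f x %.0f\n",
                    pid, bounds.origin.x, bounds.origin.y,
                    bounds.size.width, bounds.size.height);
    }
    CFRelease(windows);
}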
Hacky, tho.