Encased physics objects in Unity 5?

I'm trying to set up a strange box in Unity 3D. The problem is that it blows apart when the game is run. I read somewhere that it has something to do with rigid bodies overlapping, but I was not able to find a good resolution to the issue.
Below is a schematic of the box. The parts, labelled at the top, all have different component setups.
The handles (ha and hb) are parents to their box halves (ba and bb). The handles take care of gravity for both of them by each being a rigid body, and use box colliders. The boxes, because of their shape, use mesh colliders.
The center piece (c) is parent to both of the stoppers (sa and sb). The center is the rigid body, while the stoppers use box colliders.
The idea was that the center piece would remain inside the boxes, so if a player pulled one end of the box, that half would "extend" out to the stopper. When pushed back in, it would stop at the stopper. If two players held each end, they could both control the "stretch" of the box.
However, when running the game, the boxes immediately explode apart.
Any helpful advice would be wonderful!

It seems possible that you're running into an issue with concave colliders. Splitting your mesh colliders up into convex hulls might help. There's also this tool on the Asset Store that looks like it might have been designed for your situation.


FontForge Missing Curve Handles?

I'm using FontForge v20200314 on macOS Catalina 10.15.4.
I'm having an issue where curve points do not display handles. I have tried adding additional points, to no avail. Some of my curve points do get handles and curve properly, but I can't discern the logic behind it.
Granted, I've been using Adobe programs for 20 years, so I'm thinking I probably just have the wrong expectations of the tools. I attached a screenshot for reference; in it you can see that only some curve points have handles.
I have tried Point > Make Curve, but that causes the program to crash consistently. I have Show > Control Points Always selected.
Any advice is much appreciated.
I can't embed images, but the screenshot shows the curves behaving differently than expected.
If the point is a corner or tangent point, one or both of the handles may disappear, but occasionally curve points don't have any either. You should be able to drag one of the adjacent lines to add a control point on that side if nothing is selected (once something is selected this behavior changes). You can also select the point, right-click, and choose a different point type (possibly multiple times), but this will usually change the handles and the shape quite a bit.

Make your own mouse driver

I have a mouse from Speedlink that is able to do a lot of things, like changing the colours of its LEDs, but I can only do those things with the software provided by Speedlink.
Is it possible to code your own software that controls the LED lights of the mouse?
Yes, but you would need the hardware specifications to know what has to be sent to the mouse for it to accept the commands you're looking for. Usually these details are not published or readily accessible.
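If you did manage to work out the protocol (for example by capturing the USB traffic of the official software), sending a command would roughly look like the sketch below, which uses the hidapi library. The vendor/product IDs and every byte of the report are placeholders, not the real Speedlink protocol, so treat this as a shape of the solution rather than working code for your mouse.

```cpp
// Hypothetical sketch using the hidapi library.
// The vendor/product IDs and the report bytes are PLACEHOLDERS: the real values
// depend on the mouse's (usually unpublished) protocol, which you would have to
// reverse-engineer, e.g. by capturing the vendor software's USB traffic.
#include <hidapi/hidapi.h>
#include <cstdio>

int main() {
    if (hid_init() != 0) return 1;

    // Placeholder vendor/product IDs; replace with the mouse's actual IDs
    // (visible in your OS's USB device listing).
    hid_device* dev = hid_open(0x1234, 0x5678, nullptr);
    if (!dev) { std::fprintf(stderr, "device not found\n"); hid_exit(); return 1; }

    // Entirely made-up feature report: byte 0 is the report ID, the rest would be
    // whatever command/colour bytes the manufacturer's protocol expects (e.g. R, G, B).
    unsigned char report[] = { 0x07, 0x02, 0xFF, 0x00, 0x80 };
    if (hid_send_feature_report(dev, report, sizeof(report)) < 0)
        std::fprintf(stderr, "sending report failed\n");

    hid_close(dev);
    hid_exit();
    return 0;
}
```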
I bought two Microsoft Basic Optical mice, identical to the one that performed the functions I needed really well. The original would grab and flip the grid, with the object staying in one position within the grid, with a right click of the mouse. I am using the mice with the Blender 2.79 3D modelling app. I plugged the two new mice into two computers to try them out; they would NOT do the grid grabbing and flipping, though they would perform the other functions. The grid-grabbing function is important so that you can move the scene and inspect the model, as if walking around a solid object in the real world.

GameMaker Studio Physics Object Precise Collisions

Is it possible to have a physics object in GameMaker Studio use precise collisions?
Here's some context for my question. I'm making a pirate game where the player sails around a large ocean with a number of islands. I've been using the physics engine to control the movement of the ship, and that is working well. However, the problem arises when trying to introduce collisions between the ship and the various islands. As far as I can tell, the underlying physics fixtures can only be formed into fairly simple shapes. Specifically, the collision shape editor is limited to 12 points, and only convex shapes. This is a problem, because many of my islands are relatively complicated non-convex shapes, and aren't necessarily a single piece. It would be nice to be able to use the island sprite as a precise collision mask, as would be possible for non physics based objects.
Is there a way to do this, or a possible work-around that I'm missing? Here's an example of one of my islands:
I can see two solutions to your problem.
1 - The easiest, but performance-unfriendly.
In the sprite editor, click "Modify mask". There should be a "precise collision checking" box you can tick. This means that your sprite will be checked pixel by pixel for collisions. As you can guess, this is not performance friendly, but will do exactly what you want.
2 - The one I would recommend.
What you could do is just draw the island sprites, either through the background or via a dedicated object, and then create some simple shape objects (rectangle, circle and diamond), that would be invisible, and place them over your islands in the room editor. (Don't forget that you can stretch them).
These simple shaped objects would be the ones to check for collisions.
I used this technique to make hitboxes for complex-shaped clouds in one of my games, so I know it works.
I believe that the island you show us can be fairly well covered with a few ovals and a long rectangle.
Bonus: after doing that graphically, you can copy the creation code of the shapes from the room create event to the island create event to repeat it for multiple identical islands. Just don't forget the position/angle offset!
By using the Shape options when defining the collision shape, you can have any kind of convex polygon as the shape. Example:
The spot where you choose the Shape option is in red.
After you select that option, you can just click and drag to add or edit a vertex of the polygon. Just bear in mind that it has to be a convex polygon; GameMaker is very strict about that. You can also remove vertices by right-clicking on them.

Detect multiple bodies in Kinect?

I am working with the Kinect in openFrameworks using the ofxKinect addon, which is great and plenty of fun!
Anyway, I am looking for some pointers or a direction for dealing with multiple bodies on the screen. I was thinking of making a rect around each detected body and, when the rects intersect, something could happen: an effect or anything.
So what I am looking for are ideas or something that could point me to the right direction of detecting multiple bodies when using a kinect.
Right now, based on the depth image I get from the Kinect, I go through each pixel, create a bunch of smaller rectangles with some padding, and group them into a larger bounding rectangle if they are separate from another rectangle group. This is not ideal, as it only deals with the pixel values, is not really separating bodies from each other, and is not giving me the results I am looking for.
So any ideas would be greatly appreciated!
If you want to use ofxKinect, a quick solution would be to threshold on depth and assume that bodies, and no other objects, will be within a certain depth range. This should make it easy to use OpenCV's contour finder to isolate the outlines of the bodies and get their bounding rectangles. If the rectangles intersect (ofRectangle already does the math for you), trigger the reaction you need. Also, make sure you only trigger it once if the effect isn't showing already; otherwise you will trigger the effect multiple times per second while the two bodies' bounding rectangles intersect.
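Roughly, that pipeline could look something like the condensed sketch below, loosely following the stock ofxKinect example. The threshold values, blob-size limits and the triggerEffect() placeholder are assumptions, and minor API details (e.g. getDepthPixels, kinect.width) vary a little between openFrameworks versions.

```cpp
// Depth-threshold + contour-finder sketch, loosely based on the stock ofxKinect example.
#include "ofMain.h"
#include "ofxKinect.h"
#include "ofxOpenCv.h"

class ofApp : public ofBaseApp {
public:
    ofxKinect kinect;
    ofxCvGrayscaleImage grayImage, grayNear, grayFar;
    ofxCvContourFinder contourFinder;
    int nearThreshold = 230, farThreshold = 70;   // tune for your space
    bool effectActive = false;                    // so the effect fires once, not every frame

    void setup() {
        kinect.init();
        kinect.open();
        grayImage.allocate(kinect.width, kinect.height);
        grayNear.allocate(kinect.width, kinect.height);
        grayFar.allocate(kinect.width, kinect.height);
    }

    void update() {
        kinect.update();
        if (!kinect.isFrameNew()) return;

        // Keep only pixels whose depth falls between the near and far thresholds,
        // assuming bodies are the only things standing in that range.
        grayImage.setFromPixels(kinect.getDepthPixels());
        grayNear = grayImage;
        grayFar  = grayImage;
        grayNear.threshold(nearThreshold, true);
        grayFar.threshold(farThreshold);
        cvAnd(grayNear.getCvImage(), grayFar.getCvImage(), grayImage.getCvImage(), NULL);
        grayImage.flagImageChanged();

        // Find blob outlines; each blob's boundingRect is one body's rectangle.
        contourFinder.findContours(grayImage, 2000, kinect.width * kinect.height / 2, 4, false);

        bool anyIntersect = false;
        for (size_t i = 0; i < contourFinder.blobs.size(); i++) {
            for (size_t j = i + 1; j < contourFinder.blobs.size(); j++) {
                if (contourFinder.blobs[i].boundingRect.intersects(contourFinder.blobs[j].boundingRect)) {
                    anyIntersect = true;
                }
            }
        }

        if (anyIntersect && !effectActive) {
            effectActive = true;
            // triggerEffect();  // placeholder: start your effect exactly once
        } else if (!anyIntersect) {
            effectActive = false;
        }
    }
};
```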
You could try something a bit more hardcore and use ofxCv (not just ofxOpenCv) to tap into the HOG people-detection functionality. This is slow in itself and not ideal with the depth map, but you could run it every few seconds just to detect a person and their depth, then keep tracking that movement.
Personally, if you want to track people with the Kinect, I recommend using ofxOpenNI, as it already provides scene segmentation, and even if you don't track the skeletons you can still get useful information like the pixels pertaining to each body and their centre of mass. I'm guessing the Microsoft Kinect SDK has a similar feature and there should be an oF addon for it, but it's Windows only.
ofxKinect/libfreenect does not offer any people detection features, so you will need to roll your own.

General considerations for NUI/touch interface

For the past few months I've been looking into developing a Kinect based multitouch interface for a variety of software music synthesizers.
The overall strategy I've come up with is to create objects, either programmatically or (if possible) algorithmically, to represent the various controls of the soft synth. These should have:
X position
Y position
Height
Width
MIDI output channel
MIDI data scaler (convert x-y coords to midi values)
Two strategies I've considered for algorithmic creation are an XML description and somehow pulling stuff right off the screen (i.e. given a running program, find the x/y coords of all controls). I have no idea how to go about that second one, which is why I express it in such specific technical language ;). I could do some intermediate solution, like using mouse clicks on the corners of controls to generate an XML file. Another thing I could do, which I've seen frequently in Flash apps, is to put the screen size into a variable and use math to build all interface objects in terms of the screen size. Note that it isn't strictly necessary to make the objects the same size as the on-screen controls, or to represent all on-screen objects (some are just indicators, not interactive controls).
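For illustration, here is a minimal sketch of what one of those control objects might look like, whether it comes from an XML description or is placed by hand. The struct name, fields and the 0-127 scaling are just assumptions to make the idea concrete.

```cpp
// Hypothetical representation of one on-screen control.
#include <algorithm>

struct SynthControl {
    float x, y;            // top-left corner, in screen (or normalized) coordinates
    float width, height;
    int   midiChannel;     // MIDI output channel for this control
    int   ccNumber;        // which continuous controller this maps to

    // Does a hand/cursor position fall inside this control?
    bool contains(float px, float py) const {
        return px >= x && px <= x + width && py >= y && py <= y + height;
    }

    // "MIDI data scaler": map a vertical position within the control to 0-127.
    int scaleToMidi(float py) const {
        float t = (py - y) / height;            // 0 at the top, 1 at the bottom
        t = std::clamp(1.0f - t, 0.0f, 1.0f);   // invert so up = larger value
        return static_cast<int>(t * 127.0f + 0.5f);
    }
};
```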
Other considerations:
Given (for now) two sets of x/y coords as input (left and right hands), what is my best option for using them? My first instinct is/was to create some kind of focus test, where if the x/y coords fall within an interface object's bounds that object becomes active, and it then becomes inactive if they fall outside some other bounds for some period of time. The cheap solution I found was to use the left hand as the pointer/selector and the right as a controller, but it seems like I can do more. I have a few gesture solutions (hidden Markov models) I could screw around with. Not that they'd be easy to get to work, exactly, but it's something I could see myself doing given sufficient incentive.
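As a rough illustration of that focus test, here is a minimal sketch assuming hysteresis is done with a padded release rectangle plus a timeout; all names and threshold values are made up.

```cpp
// Hypothetical focus test with hysteresis: a control grabs focus when the hand enters
// its bounds and only releases it after the hand has stayed outside a padded "release"
// rectangle for a while. All names and thresholds are placeholders.
struct FocusTracker {
    float x, y, width, height;      // bounds of the control being watched
    float releasePadding = 40.0f;   // how far outside the bounds still counts as "in"
    float releaseDelay   = 0.5f;    // seconds outside before focus is dropped
    float timeOutside    = 0.0f;
    bool  active         = false;

    // (hx, hy) = hand position; dt = seconds since the last update
    void update(float hx, float hy, float dt) {
        bool inside = hx >= x && hx <= x + width &&
                      hy >= y && hy <= y + height;
        bool nearby = hx >= x - releasePadding && hx <= x + width  + releasePadding &&
                      hy >= y - releasePadding && hy <= y + height + releasePadding;

        if (!active) {
            if (inside) { active = true; timeOutside = 0.0f; }   // entering grabs focus
        } else if (nearby) {
            timeOutside = 0.0f;                                  // still close enough
        } else {
            timeOutside += dt;
            if (timeOutside > releaseDelay) active = false;      // held outside long enough
        }
    }
};
```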
So, to summarize, the problem is
represent the interface (necessary because the default interface always expects mouse input)
select a control
manipulate it using two sets of x/y coords (rotary/continuous controller) or, in the case of switches, preferably use a gesture to switch it without giving/taking focus.
Any comments, especially from people who have worked/are working in multitouch io/NUI, are greatly appreciated. Links to existing projects and/or some good reading material (books, sites, etc) would be a big help.
Whoa, lots of stuff here. I worked on lots of NUI stuff during my time at Microsoft, so let's see what we can do...
But first, I need to get this pet peeve out of the way: you say "Kinect based multitouch". That's just wrong. Kinect inherently has nothing to do with touch (which is why you have the "select a control" challenge). The types of UI consideration needed for touch, body tracking, and mouse are totally different. For example, in touch UI you have to be very careful about resizing things based on screen size/resolution/DPI... regardless of the screen, fingers are always the same physical size and people have the same degree of physical accuracy, so you want your buttons and similar controls to always be roughly the same physical size. Research has found 3/4 of an inch to be the sweet spot for touchscreen buttons. This isn't so much of a concern with Kinect, though, since you aren't directly touching anything; accuracy is dictated not by finger size but by sensor accuracy and the users' ability to precisely control finicky, lagging virtual cursors.
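As a quick illustration of sizing by physical dimensions rather than pixels, here is a tiny sketch; the DPI figures are just sample values, not from any particular device.

```cpp
// Convert a target physical size (e.g. the ~3/4 inch mentioned above) to pixels
// for a given display DPI, so the button stays the same physical size everywhere.
#include <cstdio>

int inchesToPixels(float inches, float dpi) {
    return static_cast<int>(inches * dpi + 0.5f);
}

int main() {
    std::printf("0.75in at  96 DPI ~ %d px\n", inchesToPixels(0.75f, 96.0f));   // ~72 px
    std::printf("0.75in at 160 DPI ~ %d px\n", inchesToPixels(0.75f, 160.0f));  // ~120 px
    return 0;
}
```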
If you spend time playing with Kinect games, it quickly becomes clear that there are 4 interaction paradigms.
1) Pose-based commands. User strikes and holds a pose to invoke some application-wide command (usually bringing up a menu)
2) Hover buttons. User moves a virtual cursor over a button and holds still for a certain period of time to select the button
3) Swipe-based navigation and selection. User waves their hand in one direction to scroll a list and in another direction to select from the list
4) Voice commands. User just speaks a command.
There are other mouse-like ideas that have been tried by hobbyists (I haven't seen these in an actual game) but frankly they suck: 1) using one hand for the cursor and the other hand to "click" where the cursor is, or 2) using the z-coordinate of the hand to determine whether to "click".
It's not clear to me whether you are asking about how to make some existing mouse widgets work with Kinect. If so, there are some projects on the web that will show you how to control the mouse with Kinect input but that's lame. It may sound super cool but you're really not at all taking advantage of what the device does best.
If I were building a music synthesizer, I would focus on approach #3, swiping (something like Dance Central). On the left side of the screen, show a list of your MIDI controllers with some small visual indication of their status. Let the user swipe their left hand to scroll through and select a controller from this list. On the right side of the screen, show how you are tracking the user's right hand within some plane in front of their body. Now you're letting them use both hands at the same time, giving immediate visual feedback of how each hand is being interpreted, and not requiring them to be super precise.
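To make the right-hand-in-a-plane part concrete, here is a rough sketch of scaling a tracked hand position within that plane to a MIDI continuous-controller value. The use of RtMidi, the CC number and the plane bounds are assumptions, not part of the original suggestion; any MIDI output library would do.

```cpp
// Hypothetical sketch: map the tracked right-hand position within a control plane
// to a MIDI continuous-controller value and send it out.
#include "RtMidi.h"
#include <algorithm>
#include <vector>

// Normalize the hand's vertical position within the plane to 0..1, then scale to 0..127.
int handToCC(float handY, float planeTop, float planeBottom) {
    float t = (handY - planeTop) / (planeBottom - planeTop);
    t = std::clamp(1.0f - t, 0.0f, 1.0f);          // hand up = larger value
    return static_cast<int>(t * 127.0f + 0.5f);
}

int main() {
    RtMidiOut midiOut;
    if (midiOut.getPortCount() == 0) return 1;
    midiOut.openPort(0);

    // One update: pretend the tracked hand is at y = 0.3 in a 0..1 plane.
    int value = handToCC(0.3f, 0.0f, 1.0f);

    // Control Change on channel 1 (0xB0), CC #74 (placeholder), with the scaled value.
    std::vector<unsigned char> msg = { 0xB0, 74, static_cast<unsigned char>(value) };
    midiOut.sendMessage(&msg);
    return 0;
}
```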
P.S. I'd also like to give a shout-out to Josh Blake's upcoming NUI book. It's good stuff. If you really want to master this area, go order a copy :) http://www.manning.com/blake/