PhysX - Stick controllers to kinematic actors

By default kinematic actors in PhysX will simply push controllers out of the way or ignore them:
http://youtu.be/2bJDOjFIrRI
This is obviously not the desired behavior for things like elevators or escalators.
I'm unsure how to actually 'stick' the controller to the platform to make sure the player doesn't fall off.
I tried adding the platform's kinematic target offset to the displacement vector when moving the controller each simulation step; however, that didn't prevent the 'pushing' from the kinematic actor and wasn't very accurate either.
How is this usually accomplished? The documentation mentions using obstacles for moving platforms, but I don't see how that would help in this case.
I'm using PhysX 3.3.0.

You could create a virtual PxScene that represents the moving platform. Its space is treated as the platform's local space, so child controllers won't be pushed at all. Moreover, you can add colliders that prevent the controller from moving outside the platform's boundary.
Obviously, the disadvantage of this method is the use of virtual scenes and multiple controllers. You will have to extend your actors with the ability to switch their current scene, and moving platforms will have to be more elaborate (they will need triggers to generate the corresponding scene-change events).
As for the advantages, you get precise kinematics (for free!) for an actor standing on a horizontally moving platform.
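A minimal sketch of the per-frame update under this scheme, assuming PhysX 3.3 and that a trigger has already moved the character's controller into the platform's virtual scene (updateCharacterOnPlatform, localController, and platformActor are illustrative names, not part of the PhysX API):

    #include <PxPhysicsAPI.h>

    using namespace physx;

    // Sketch only: the controller lives in a virtual scene whose static
    // frame IS the platform, so the platform never pushes it. Each frame we
    // move the controller in local space, then compose its pose with the
    // platform's world pose for rendering.
    void updateCharacterOnPlatform(PxController* localController,
                                   PxRigidDynamic* platformActor,
                                   const PxVec3& inputDisp, float dt)
    {
        PxControllerFilters filters;
        localController->move(inputDisp, 0.001f, dt, filters);

        // world position = platform pose applied to the local position
        PxExtendedVec3 lp = localController->getPosition();
        PxVec3 local((float)lp.x, (float)lp.y, (float)lp.z);
        PxVec3 worldPos = platformActor->getGlobalPose().transform(local);
        // ... feed worldPos into the character's render transform ...
    }

The colliders at the platform's edges live in the same virtual scene, so the controller cannot step off while riding.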

Related

OOP organization with a Map

I have a question about organizing code while also demonstrating fundamental OOP principles. My task is to implement a world (an MxN grid) with robots that get instructions to move around in the form of strings. They are also given an initial starting position and orientation. Instructions are performed to completion, one robot at a time.
I made two classes, Robot and Map, but when I completed my coding I realized that the Map did not really do anything, and when I want to test functions within the Robot class (making sure coordinates are within bounds, etc.) it seems that the Map class is more of a hassle than anything. However, I feel like it is important for demonstrating the separation of things. In this case is it necessary to have two classes?
I think it is.
Map looks like a collection of Robots. It's like an Eulerian control volume that Robots flux in and out of. It keeps track of the space of acceptable locations in space and time. It maintains rules (e.g., "only one Robot in a square at a time"). It feels analogous to a Board in a Chess or Checkers game.
The problem appears to be that you can't figure out what the meaningful state and behavior of a Map is.
I can see how a Robot would interact with a Map: It would propose a motion, which is a vector with direction and magnitude, and interrogate the Map to see if it ran afoul of any of the rules of motion for a Robot. Those are owned by the Map, not the Robot. Different Maps might allow different rules of motion (e.g. no diagonal moves, one square at a time, etc.)
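A minimal sketch of that interaction in C++ (Move, isLegal, and tryMove are illustrative names, not from the original task):

    // The Map owns the rules of motion; the Robot proposes a move and asks.
    struct Move { int dx, dy; };  // direction and magnitude

    class Map {
    public:
        Map(int width, int height) : width_(width), height_(height) {}

        // Rules of motion live here, not in Robot. A different Map could
        // also forbid diagonals, occupied squares, etc.
        bool isLegal(int x, int y, const Move& m) const {
            int nx = x + m.dx, ny = y + m.dy;
            return nx >= 0 && nx < width_ && ny >= 0 && ny < height_;
        }
    private:
        int width_, height_;
    };

    class Robot {
    public:
        Robot(int x, int y) : x_(x), y_(y) {}

        void tryMove(const Map& map, const Move& m) {
            if (map.isLegal(x_, y_, m)) { x_ += m.dx; y_ += m.dy; }
            // otherwise: ignore or report, per the exercise's rules
        }
    private:
        int x_, y_;
    };

With this split, testing the bounds checking is a test of Map alone, which addresses the concern that Map "did not really do anything."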

Java3D ViewPlatform vs ViewingPlatform

I'm having trouble understanding what the difference is between ViewPlatform and ViewingPlatform. Could someone please shed some light on this?
Have you seen the documentation?
It says:
The ViewPlatform leaf node object controls the position, orientation and scale of the viewer. It is the node in the scene graph that a View object connects to. A viewer navigates through the virtual universe by changing the transform in the scene graph hierarchy above the ViewPlatform.
ViewingPlatform is used to set up the "view" side of a Java 3D scene graph. The ViewingPlatform object contains a MultiTransformGroup node to allow for a series of transforms to be linked together. To this structure the ViewPlatform is added as well as any geometry to associate with this view platform.
Does that explain it well enough? You may need to do some wider reading to put some of this in context. The scene graph that Java3D employs abstracts us away from the nitty-gritty of the tricky aspects of 3D graphics; we just have to learn how to deal with it from the higher level. Effectively, it's a tree into which all the graphical aspects are put, and it splits into two main forks: one for content, and one for controlling how you see that content.
These are both view-fork objects. The ViewingPlatform is a management node that brings other aspects together, whereas the ViewPlatform is a leaf node that controls specific aspects, as detailed above.

How to program a scripted event/cutscene system in a tile-based RPG in Objective-C?

For background, I have been working on an RPG based on Ray Wenderlich's tutorials (example: http://www.raywenderlich.com/1163/how-to-make-a-tile-based-game-with-cocos2d).
Now I am trying to build a scripted event/cutscene system so that, for instance, when a player enters a building, the different characters can have a discussion of current events before continuing the adventure. My only problem is that I can't really visualize how one would implement this.
I would guess some sort of one-time-use trigger, maybe kept in a large switch statement on a singleton somewhere, which perhaps draws all the temporary characters? The event would then deactivate itself.
I am just looking for a blueprint of how one would do this. Although programming examples are welcome as well.
It depends a lot on how much time you want to commit to the system and how versatile you want the final system to be. A powerful cut-scene system can be flexible enough to be used in almost every interaction in a typical 2d RPG.
If you want to go all out, I would suggest a heavily data-driven approach. Keep as much data in files as you can and use the filesystem to your advantage. If you say 'all the dialog scenes are in this folder', then adding a new scene just means dropping it in the folder, rather than creating the scene and then touching some master switch statement somewhere. Just keep in mind that with a large system you want to make adding a new cutscene as simple as possible, not 400 different places to touch.
I would also stay away from switch statements for tracking progress within a cutscene; that adds a lot of code overhead per scene. Ideally a cutscene would be as simple as an array of data and a position. Your cutscene manager, the singleton, can parse through the array, decode the data into commands, and fire them off.
Sorry if that's a bit vague, but a lot of these decisions depend on how your engine is structured and what you want out of the system. Keep in mind that the more general the system is, the more uses you may find for it going forward, but it will take longer to get up and running to begin with.
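As a rough sketch of that "array of data and a position" idea (in C++ for brevity, though the question is Objective-C; all names are illustrative):

    #include <cstddef>
    #include <string>
    #include <utility>
    #include <vector>

    // A cutscene is just a list of commands plus a cursor; the manager
    // decodes one command per update and fires it off.
    struct Command {
        enum Type { Say, MoveActor, Wait } type;
        std::string actor;  // who speaks or moves
        std::string arg;    // dialog line, target tile, delay, ...
    };

    class CutsceneManager {  // the singleton from the answer above
    public:
        void start(std::vector<Command> scene) {
            commands_ = std::move(scene);
            pos_ = 0;  // the "position" into the array of data
        }
        bool update() {  // call once per game cycle
            if (pos_ >= commands_.size()) return false;  // scene finished
            const Command& c = commands_[pos_++];
            switch (c.type) {  // decode the data, fire off the command
                case Command::Say:       /* show a dialog box   */ break;
                case Command::MoveActor: /* path actor to tile  */ break;
                case Command::Wait:      /* pause for c.arg sec */ break;
            }
            return true;  // still running; keep player input frozen
        }
    private:
        std::vector<Command> commands_;
        std::size_t pos_ = 0;
    };

Loading the std::vector<Command> from a file in the scenes folder keeps the system data-driven: new cutscenes are new files, not new code.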
You can just check which tile you are on while you are moving, and when you are on a specific tile you can start a cutscene. You can also add a tag via Tiled (the recommended editor to use with CCTMXTiledMap) to your map to specify where the cutscene should begin, just like the character start point in that tutorial. Then you check for the specified trigger (either a specific tile or the point tagged in the map) every game cycle. After that it's almost easy: you just freeze the controls and play prerecorded camera and object movements until the cutscene finishes, then restore the game to normal mode and turn off checking for the trigger.

What is the difference between a Morph in Morphic and a NSView in Cocoa?

I'd like to know about the things that make Morphic special.
Morphic is much more than NSView or any other graphics class that simply allows the re-implementation of a limited set of features. Morphic is an extremely malleable UI construction kit. Some design ideas behind Morphic make this intention clear:
A comprehensive hierarchy of 2D coordinate systems is included. They are not restricted to Cartesian or linear; useful nonlinear coordinate systems include polar, logarithmic, hyperbolic, and geographic (map-like) projections.
Separation of the handling of coordinate systems from the morphs themselves. A morph should only need to select its preferred coordinate system, instead of needing to convert every point it draws to World coordinates by itself. Its #drawOn: method and the location of its sub-morphs are expressed in its own coordinate system.
Complete independence from display properties such as size or resolution. There is no concept of a pixel; the GUI is conceived at a higher level, and all of it is independent of pixel resolution. All rendering is anti-aliased.
Separating the coordinate system eases the moving, zooming and rotation of morphs.
All coordinates are Float numbers. This is good for allowing completely arbitrary scales without significant rounding errors.
The Morph hierarchy is not a hierarchy of shapes. Morphs don't have a concept of a border or color. There is no general concept of submorph aligning. A particular morph may implement these in any way that makes sense for itself.
Morphic event handling is flexible and allows you to send events to arbitrary objects. That object need not subclass Morph.
Warning: Smalltalk's live dynamic environment is a red pill. Static, frozen languages will never be the same for you ;-)
In a nutshell: Morphic is a virtual world where you can directly explore live objects (just like the real world). Did you ever look at a UI and...
wonder "wow, that's really cool! How did they do that?"
kvetch "I wish they had done X instead!"
While these thoughts would lead to pain and frustration in any other environment, not so in Morphic.
If you want to blow your mind, become a god in a Morphic world:
Launch a Pharo image, and click on the background (which is actually the "World") to bring up the world menu.
Bring up the "halos" on one of the menu options (shift-alt-click on my Mac).
Drag the "Pick Up" halo (top-middle) and drop it somewhere in the world.
Enjoy your menu item, which is now available wherever you want it.
Seriously, click it and watch the Browser open!!
Ever have an option you always use that a vendor has buried three-menu-levels deep? Could this be useful?! This is a glimpse of the power of a live direct GUI environment like Morphic.
If you're intrigued, read John Maloney & Randall Smith's paper Directness and Liveness in the Morphic User Interface Construction Environment
The title does not match your question, so I will answer your question and not the title.
I have read about Morphic for the last two days, and will conclude with what I think makes Morphic special.
Morphic is perfect for live coding: there is a direct mapping such that when the code is changed, the output on the screen changes, and when morphs on the screen are changed (dragged), the values in the code change. That is cool in an art performance!
But Morphic aims for higher abstractions than that. The properties of the morphs are abstracted away from the code; the separation of concerns can go as far as a file or a server-side database.
I suppose Web Storage and a JavaScript file are a good option for storing the liveness state of the Morph properties changed interactively. You see, programming is done through each Morph; the code then need only handle click and drag events.
The research aim goes even further and abstracts the code away: coding can be done through the Morph to define what happens on a click or a drag. Morphs can be puzzle pieces, as in Scratch.
A program has to be backed up into a file somewhere. I don't consider coding in the cloud safe, so a JS file is the only alternative (if setting up a server is not an option), because data files are not allowed locally, not even in the same folder as the web app. The same-origin policy means the same server, not the same folder.
When the app starts, the JavaScript file (or Web Storage in the first place) will set up the world of morphs. The user interacts with that world. The new state can be stored in Web Storage and backed up via a download.
You can use Lively Kernel as the language in the file, or store the morph data in an object, or whatever you find simplest to generate as a file to download.
So what is special about this? I will not repeat the accepted answer, but this is my conclusion:
Everything you see on the Morphic screen is a morph.
The tree of morphs is called a world.
The coordinates, dimensions, and properties of each morph are abstracted away from the code into the tree.
The research aims to abstract away the code, too.

Event handling in component based game engine design

I imagine this question or variations of it get passed around a lot, so if what I'm saying is a duplicate, and the answers lie elsewhere, please inform me.
I have been researching game engine designs and have come across the component-based entity model. It sounds promising, but I'm still working out its implementation.
I'm considering a system where the engine is arranged into several "subsystems," which each manage some aspect, like rendering, sound, health, AI, etc. Each subsystem has a component type associated with it, like a health component for the health subsystem. An "entity," for example an NPC, a door, some visual effect, or the player, is simply composed of one or more components that together give the entity its functionality.
I identified four main channels of information passing: a component can broadcast to all components in its current entity, a component can broadcast to its subsystem, a subsystem can broadcast to its components, and a subsystem can broadcast to other subsystems.
For example, if the user wanted to move their character, they would press a key. This key press would be picked up by the input subsystem, which then broadcasts the event so it can be picked up by the player subsystem. The player subsystem then sends this event to all player components (and thus the entities those components compose), and those player components would tell their own entity's position component to go ahead and move.
All of this for a key press seems a bit long-winded, and I am certainly open to improvements to this architecture. But anyway, my main question still follows.
As for the events themselves, I considered a design where an event behaves as in the visitor pattern. What matters to me is that if an event comes across a component it doesn't support (a move event has nothing directly to do with AI or health, say), it ignores the component. If an event doesn't find the component it's going after, it doesn't matter.
The visitor pattern almost works. However, it would require that I have virtual functions for every type of component (i.e. visitHealthComponent, visitPositionComponent, etc.) even if it doesn't have anything to do with them. I could leave these functions empty (so if it did come across those components, it would be ignored), but I would have to add another function every time I add a component.
My hopes were that I would be able to add a component without necessarily adding stuff to other places, and add an event without messing with other stuff.
So, my two questions:
Are there any improvements I could make to this design, in terms of efficiency, flexibility, etc.?
What would be the optimal way to handle events?
I have been thinking about using entity systems for one of my own projects and have gone through a similar thought process. My initial thought was to use an Observer pattern to deal with events. I, too, originally considered some kind of visitor pattern, but decided against it for the very reasons you bring up.
My thoughts are that the subsystems will provide a subsystem specific publish/subscribe interface, and thus subsystem dependencies will be resolved in a "semi-loosely" coupled fashion. Any subsystem that depends on events from another subsystem will know of the subscriber interface to that subsystem and thus can effectively make use of it.
Unfortunately, how these subscribers get handles to their publishers is still somewhat of an issue in my mind. At this point, I am favoring some kind of dynamic creation where each subsystem is instantiated, and then a second phase is used to resolve the dependencies and put all the subsystems into a "ready state".
Anyway, I am very interested in what worked out for you and any problems you encountered on your project :)
Use an event bus, aka event aggregator. What you want is an event mechanism that requires no coupling between subsystems, and an event bus will do just that.
http://martinfowler.com/eaaDev/EventAggregator.html
http://stackoverflow.com/questions/2343980/event-aggregator-implementation-sample-best-practices
etc
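For illustration, a minimal type-indexed event bus in C++ (all names illustrative; a real version would add unsubscription, queuing, thread safety, etc.):

    #include <functional>
    #include <typeindex>
    #include <unordered_map>
    #include <vector>

    // Subsystems subscribe to event types without knowing who publishes
    // them; events nobody subscribed to are silently dropped, which gives
    // the "if nothing handles it, it doesn't matter" behavior for free.
    class EventBus {
    public:
        template <typename E>
        void subscribe(std::function<void(const E&)> handler) {
            handlers_[std::type_index(typeid(E))].push_back(
                [handler](const void* e) { handler(*static_cast<const E*>(e)); });
        }

        template <typename E>
        void publish(const E& event) {
            auto it = handlers_.find(std::type_index(typeid(E)));
            if (it == handlers_.end()) return;  // no subscribers: fine
            for (auto& h : it->second) h(&event);
        }
    private:
        std::unordered_map<std::type_index,
                           std::vector<std::function<void(const void*)>>> handlers_;
    };

    // Usage: the input subsystem publishes, the player subsystem listens.
    struct KeyPressed { int keyCode; };

    void wireUp(EventBus& bus) {
        bus.subscribe<KeyPressed>([](const KeyPressed& e) {
            // forward to the player entity's position component, etc.
        });
        bus.publish(KeyPressed{32});
    }

Unlike the visitor approach, adding a new event or component type touches no other code: there is no master interface to extend.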
This architecture is described here: http://members.cox.net/jplummer/Writings/Thesis_with_Appendix.pdf
There are at least three problems I encountered implementing this in a real project:
Systems aren't notified when something happens; the only way to find out is to ask (is the player dead? is the wall visible? and so on). To avoid this you can use a simple MVC instead of the observer pattern.
What if your object is a composite (i.e., it consists of other objects)? The system will have to traverse the whole hierarchy, asking about component state.
The main disadvantage is that this architecture mixes everything together; for example, why does the player need to know that you pressed a key?
I think the answer is a layered architecture with an abstracted representation...
Excuse my bad English.
I am writing a flexible and scalable Java 3D game engine based on an entity-component system. I have finished some basic parts of it.
First, I want to say something about ECS architecture: I don't agree that a component should communicate with other components in the same entity. Components should only store data; systems process them.
For event handling, I think basic input handling should not be included in an ECS. Instead, I have a system called the intent system and a component called the intent component, which contains many intents. An intent means an entity wants to do something to another entity.
The intent system processes all the intents. When it processes an intent, it broadcasts the corresponding information to other systems or adds other components to the entity.
I also wrote an interface called an intent generator. In a local game you can implement a keyboard or mouse input generator, and in a multiplayer game you can implement a network intent generator. An AI system can also generate intents.
You may think the intent system processes too many things in the game, but in fact it delegates much of the processing to other systems. I also wrote a script system: a specific, special entity has a script component that does special things.
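A rough sketch of how those pieces might fit together (in C++ rather than the author's Java; all names beyond the answer's own terms are illustrative):

    #include <vector>

    // Intents are plain data: "actor wants to do action to target".
    struct Intent { int actor; int action; int target; };

    struct IntentComponent {  // component: data only, no logic
        std::vector<Intent> intents;
    };

    struct IntentGenerator {  // the interface from the answer
        virtual ~IntentGenerator() = default;
        // Keyboard/mouse, network, and AI generators all implement this.
        virtual void generate(IntentComponent& out) = 0;
    };

    class IntentSystem {  // system: all the logic lives here
    public:
        void update(IntentComponent& c) {
            for (const Intent& i : c.intents) {
                // Decode the intent, then broadcast to other systems or
                // attach new components to the acting entity.
                (void)i;
            }
            c.intents.clear();
        }
    };

Because generators all target the same component, swapping local input for a network or AI source changes nothing downstream.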
Originally, when I developed something, I always wanted to make a grand architecture that included everything. But for game development that is sometimes very inefficient; different game objects may have completely different functions. ECS is great as a data-oriented programming system, but we cannot include everything in it for a complete game.
By the way, our ECS-based game engine will be open source in the near future, so you can read it. If you are interested in it, I also invite you to join us.