Java 3D ViewPlatform vs ViewingPlatform - java-3d

I'm having trouble understanding what the difference is between the ViewPlatform and the ViewingPlatform. Could someone please shed some light on this?

Have you seen the documentation?
quote:
The ViewPlatform leaf node object controls the position, orientation and scale of the viewer. It is the node in the scene graph that a View object connects to. A viewer navigates through the virtual universe by changing the transform in the scene graph hierarchy above the ViewPlatform.
ViewingPlatform is used to set up the "view" side of a Java 3D scene graph. The ViewingPlatform object contains a MultiTransformGroup node to allow for a series of transforms to be linked together. To this structure the ViewPlatform is added as well as any geometry to associate with this view platform.
/quote
Does that explain it well enough? You may need to do some wider reading to put some of this in context. The scene graph that Java 3D employs abstracts us away from the nitty-gritty of dealing with the tricky aspects of 3D graphics; we just have to learn how to deal with it at a higher level. Effectively, it's a tree into which all the graphical aspects are put, and it splits into two main forks: one for content, and one for controlling how you see that content.
These are both objects on the view fork. The 'ViewingPlatform' is a management node that brings other aspects together, whereas the 'ViewPlatform' is a leaf node that controls specific aspects, as detailed above.
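To see the relationship in code, here is a minimal sketch using the SimpleUniverse utility (which builds the whole view branch for you, including a ViewingPlatform that already contains a ViewPlatform wired to a View); "moving the camera" means writing a new transform into the TransformGroup that sits above the ViewPlatform leaf node:

```java
import javax.media.j3d.BranchGroup;
import javax.media.j3d.Transform3D;
import javax.media.j3d.TransformGroup;
import javax.vecmath.Vector3f;

import com.sun.j3d.utils.geometry.ColorCube;
import com.sun.j3d.utils.universe.SimpleUniverse;
import com.sun.j3d.utils.universe.ViewingPlatform;

public class ViewPlatformDemo {
    public static void main(String[] args) {
        // SimpleUniverse builds the view branch, including a ViewingPlatform
        // that already contains a ViewPlatform leaf node connected to a View.
        SimpleUniverse universe = new SimpleUniverse();

        // Content branch: something to look at.
        BranchGroup content = new BranchGroup();
        content.addChild(new ColorCube(0.3));
        universe.addBranchGraph(content);

        // The ViewingPlatform is the management object on the view side...
        ViewingPlatform viewingPlatform = universe.getViewingPlatform();

        // ...and to move the viewer you change the transform in the
        // TransformGroup above the ViewPlatform leaf node it contains.
        TransformGroup vpTransform = viewingPlatform.getViewPlatformTransform();
        Transform3D t = new Transform3D();
        t.setTranslation(new Vector3f(0f, 0f, 5f));   // step back 5 units
        vpTransform.setTransform(t);
    }
}
```

If you build the view branch by hand instead of using SimpleUniverse, you create the ViewPlatform yourself and attach a View to it; the ViewingPlatform is the convenience structure from the utility packages that assembles those pieces for you.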

Related

Making Unreal Engine 4 Objects Like Cubes Swimmable

I'm developing an Unreal Engine 4 survival game. So far I have allocated two areas for lakes and placed cubes with water-like textures, and I want to make it so that you can enter the cube and go into a swimming position. I also need the water to have a different color depending on distance, so that from 2 meters away it looks dark and it gets lighter as you get closer, changing as you move. I don't mind if we do one thing at a time. If you guys can help me I would be extremely grateful! Hope you can help!
Set the cube up as a trigger that overlaps pawns. When the cube blueprint receives an "OnActorBeginOverlap" event, cast the other actor to your pawn blueprint and then call a function on it telling it to go into swim mode.
https://docs.unrealengine.com/latest/INT/Engine/Blueprints/UserGuide/Events/
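If it helps to see that flow outside of the Blueprint editor, here is a rough, engine-agnostic sketch of the same logic (written in Java with entirely hypothetical types; it only mirrors the Blueprint wiring described above and is not UE4 API):

```java
// Engine-agnostic sketch of the overlap-driven mode switch; every type here is
// hypothetical and only mirrors the Blueprint logic described above.
interface Pawn {
    void setSwimMode(boolean swimming);
}

class WaterVolume {
    // Equivalent of the cube blueprint's OnActorBeginOverlap event.
    void onActorBeginOverlap(Object otherActor) {
        if (otherActor instanceof Pawn) {          // "cast it to the pawn blueprint"
            ((Pawn) otherActor).setSwimMode(true); // tell it to go into swim mode
        }
    }

    // Equivalent of OnActorEndOverlap: leave swim mode when exiting the water.
    void onActorEndOverlap(Object otherActor) {
        if (otherActor instanceof Pawn) {
            ((Pawn) otherActor).setSwimMode(false);
        }
    }
}
```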
You first need to add a collision component in the blueprint viewport of the cube, then place events that trigger material effects to simulate an underwater scene, or you can fairly simply create a post-processing volume like this one on YouTube: www.youtube.com/watch?v=fLtSfG8f6NM
For a flashier but more involved effect, check this one out: www.youtube.com/watch?v=8jbK00s2tKg
The most important part is implementing the underwater floating movement. Create another pawn blueprint that has a Floating Pawn Movement component in it. After that, add an event in the level blueprint that spawns this floating pawn as a runtime object using the "Spawn Actor from Class" node (tip: using C++ is much more convenient for manipulating runtime objects) and possess it ( https://docs.unrealengine.com/latest/INT/Gameplay/HowTo/PossessPawns/Blueprints/ ). You can tweak the floating pawn's movement settings to get better results.
P.S. Feel free to ask me any questions. YouTube is a fantastic place for learning UE4; a VPN such as ExpressVPN works if you can't access YouTube directly.

Automated testing of graphic outputs

I wonder whether there is any tool or standard method for automatically testing programs that produce graphical output.
For example, suppose a simple painting application is built that allows users to draw circles and rectangles at specific locations. The tests would need to check whether the shapes are located exactly where specified.
My problem is: is there a standard way to automate the test procedure instead of having a tester manually check the output again and again?
There are several approaches, but the most important point is that the GUI part comes last.
The GUI is (or should be) responsible only for visualizing data. This implies that you have some underlying model and functionality that is told to create a circle or rectangle at a certain position. You would usually test this first to make sure your functionality does the right thing and the underlying data is correct. Functionality and models can be fully covered by regular API tests.
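As a minimal sketch of such a model-level test (the DrawingModel and Circle classes are hypothetical stand-ins for your own model, and the test uses JUnit):

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

// Hypothetical model classes standing in for "the underlying model and functionality".
class Circle {
    final int x, y, radius;
    Circle(int x, int y, int radius) { this.x = x; this.y = y; this.radius = radius; }
}

class DrawingModel {
    private final List<Circle> shapes = new ArrayList<>();
    void drawCircle(int x, int y, int radius) { shapes.add(new Circle(x, y, radius)); }
    List<Circle> getShapes() { return shapes; }
}

// The API-level test: no GUI involved, just the model.
public class DrawingModelTest {
    @Test
    public void circleIsStoredAtRequestedPosition() {
        DrawingModel model = new DrawingModel();
        model.drawCircle(100, 150, 40);          // x, y, radius

        Circle circle = model.getShapes().get(0);
        assertEquals(100, circle.x);
        assertEquals(150, circle.y);
        assertEquals(40, circle.radius);
    }
}
```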
Your particular question is how to check whether the visualization part is correct. IMHO you have two options for automation:
Use screenshots and diff the drawing canvas between a static expectation screenshot and the actual test result (a sketch of this approach follows below)
Use tracing: take a screenshot of the canvas area and convert it to a vector image, which allows you to check that certain vectors are in the right place
In general, checking GUI specifics such as correct colors and exact placement is still a human task. You can only test as much as you can through API tests and reduce the human part to a minimum.
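For the screenshot-diff option, a bare-bones sketch in plain Java might look like this, assuming you can render the canvas under test to a PNG and keep a previously approved reference image next to it (the file names and the tolerance are placeholders):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class ImageDiff {

    /** Returns the number of pixels whose ARGB value differs between the two images. */
    public static int countDifferingPixels(BufferedImage expected, BufferedImage actual) {
        if (expected.getWidth() != actual.getWidth()
                || expected.getHeight() != actual.getHeight()) {
            throw new IllegalArgumentException("Image sizes differ");
        }
        int diff = 0;
        for (int y = 0; y < expected.getHeight(); y++) {
            for (int x = 0; x < expected.getWidth(); x++) {
                if (expected.getRGB(x, y) != actual.getRGB(x, y)) {
                    diff++;
                }
            }
        }
        return diff;
    }

    public static void main(String[] args) throws Exception {
        // "expected.png" is a previously approved reference screenshot;
        // "actual.png" is produced by rendering the canvas under test.
        BufferedImage expected = ImageIO.read(new File("expected.png"));
        BufferedImage actual = ImageIO.read(new File("actual.png"));

        int differing = countDifferingPixels(expected, actual);
        // Allow a small tolerance for anti-aliasing differences (placeholder threshold).
        System.out.println(differing <= 50 ? "PASS" : "FAIL (" + differing + " pixels differ)");
    }
}
```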

OOP organization with a Map

I have a question about organizing code while also demonstrating fundamental OOP principles. My task is to implement a world (an MxN grid) with robots that receive instructions to move around in the form of strings. Each robot is also given an initial starting position and orientation. Instructions are executed to completion one robot at a time.
I made two classes, Robot and Map, but when I finished coding I realized that the Map did not really do anything, and when I want to test functions within the Robot class (making sure coordinates are within bounds, etc.) the Map class seems like more of a hassle than anything. However, I feel it is important for demonstrating the separation of concerns. In this case, is it necessary to have two classes?
I think it is.
Map looks like a collection of Robots. It's like an Eulerian control volume that Robots flux in and out of. It keeps track of the space of acceptable locations in space and time. It maintains rules (e.g. "only one Robot in a square at a time"). It feels analogous to a Board in a Chess or Checkers game.
The problem appears to be that you can't figure out what the meaningful state and behavior of a Map is.
I can see how a Robot would interact with a Map: It would propose a motion, which is a vector with direction and magnitude, and interrogate the Map to see if it ran afoul of any of the rules of motion for a Robot. Those are owned by the Map, not the Robot. Different Maps might allow different rules of motion (e.g. no diagonal moves, one square at a time, etc.)
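As a minimal sketch of that split (all class and method names here are hypothetical, just to illustrate which object owns the rules):

```java
// Map.java - the Map owns the space and the rules of motion.
public class Map {
    private final int width;
    private final int height;

    public Map(int width, int height) {
        this.width = width;
        this.height = height;
    }

    /** The only rule in this sketch: positions must stay within the grid. */
    public boolean isLegalPosition(int x, int y) {
        return x >= 0 && x < width && y >= 0 && y < height;
    }
}

// Robot.java - the Robot proposes motions and lets the Map accept or reject them.
public class Robot {
    private int x;
    private int y;
    private final Map map;

    public Robot(Map map, int startX, int startY) {
        this.map = map;
        this.x = startX;
        this.y = startY;
    }

    /** Propose a motion; the Map, not the Robot, decides whether it is allowed. */
    public boolean move(int dx, int dy) {
        int newX = x + dx;
        int newY = y + dy;
        if (!map.isLegalPosition(newX, newY)) {
            return false; // rejected by the Map's rules
        }
        x = newX;
        y = newY;
        return true;
    }
}
```

With the rules living in the Map, a different Map could forbid diagonal moves or occupied squares without touching the Robot at all.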

PhysX - Stick controllers to kinematic actors

By default kinematic actors in PhysX will simply push controllers out of the way or ignore them:
http://youtu.be/2bJDOjFIrRI
This is obviously not the desired behavior for things like elevators or escalators.
I'm unsure on how to actually 'stick' the controller to the platform to make sure the player doesn't fall off.
I tried adding the kinematic target offset of the platform to the displacement vector when moving the controller every simulation step; however, that doesn't prevent the 'pushing' from the kinematic actor and wasn't very accurate either.
How is this usually accomplished? The documentation mentions using obstacles for moving platforms, but I don't see how that would help in this case.
I'm using PhysX 3.3.0.
You could create a virtual PxScene that represents the moving platform. Its space is treated as the local space of the platform, so child controllers won't be pushed at all. Moreover, you can add colliders that prevent the controller from moving outside the boundary of the platform.
Obviously, the disadvantage of this method is that it requires virtual scenes and multiple controllers. You will have to extend your actors and give them the ability to switch their current scene. Moving platforms will also have to be more elaborate (they will need triggers that generate the corresponding scene-change events).
As for the advantages, you get precise kinematics (for free!) for an actor standing on a horizontally moving platform.
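To make the "local space" idea concrete, here is a minimal, engine-agnostic sketch (written in plain Java with made-up types, not the PhysX API) of the bookkeeping involved: the controller stores its position relative to the platform, and its world position is recomputed from the platform's position every step, so the platform's motion is inherited automatically (platform rotation is ignored for brevity):

```java
// Engine-agnostic sketch of keeping a character in a platform's local space.
// Vec3 and PlatformRider are hypothetical types, not PhysX classes.
class Vec3 {
    double x, y, z;
    Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    Vec3 add(Vec3 o) { return new Vec3(x + o.x, y + o.y, z + o.z); }
    Vec3 sub(Vec3 o) { return new Vec3(x - o.x, y - o.y, z - o.z); }
}

class PlatformRider {
    private Vec3 localOffset;            // controller position relative to the platform

    /** Called once when the controller steps onto the platform. */
    void attach(Vec3 controllerWorldPos, Vec3 platformWorldPos) {
        localOffset = controllerWorldPos.sub(platformWorldPos);
    }

    /** Called every simulation step while the controller stays on the platform. */
    Vec3 worldPosition(Vec3 platformWorldPos, Vec3 playerDisplacement) {
        // Player input moves the controller within the platform's local space;
        // the platform's own motion is inherited for free.
        localOffset = localOffset.add(playerDisplacement);
        return platformWorldPos.add(localOffset);
    }
}
```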

What is the difference between a Morph in Morphic and a NSView in Cocoa?

I'd like to know about the things that make Morphic special.
Morphic is much more than NSView or any other graphics class that simply allows the re-implementation of a limited set of features. Morphic is an extremely malleable UI construction kit. Some design ideas behind Morphic make this intention clear:
A comprehensive hierarchy of 2D coordinate systems is included. They are not restricted to Cartesian or linear. Useful nonlinear coordinate systems include polar, logarithmic, hyperbolic and geographic (map-like) projections.
Separation of the handling of coordinate systems from the morphs themselves. A morph should only need to select its preferred coordinate system, instead of needing to convert every point it draws to World coordinates by itself. Its #drawOn: method and the location of its sub-morphs are expressed in its own coordinate system.
Complete independence from display properties such as size or resolution. There is no concept of a pixel. The GUI is conceived at a higher level, and all of it is independent of pixel resolution. All rendering is anti-aliased.
Separating the coordinate system eases the moving, zooming and rotation of morphs.
All coordinates are Float numbers. This is good for allowing completely arbitrary scales without significant rounding errors.
The Morph hierarchy is not a hierarchy of shapes. Morphs don't have a concept of a border or color. There is no general concept of submorph aligning. A particular morph may implement these in any way that makes sense for itself.
Morphic event handling is flexible and allows you to send events to arbitrary objects. That object need not subclass Morph.
Warning: Smalltalk's live dynamic environment is a red pill. Static, frozen languages will never be the same for you ;-)
In a nutshell: Morphic is a virtual world where you can directly explore live objects (just like the real world). Did you ever look at a UI and...
wonder "wow, that's really cool! How did they do that?"
kvetch "I wish they had done X instead!"
While these thoughts would lead to pain and frustration in any other environment, not so in Morphic.
If you want to blow your mind, become a god in a Morphic world:
Launch a Pharo image, and click on the background (which is actually the "World") to bring up the world menu:
Bring up the "halos" on one of the menu options (shift-alt-click on my Mac):
Drag the "Pick Up" halo (top-middle) and drop it somewhere in the world:
Enjoy your menu item which is now available wherever you want it:
Seriously, click it and watch the Browser open!!
Ever have an option you always use that a vendor has buried three menu levels deep? Could this be useful?! This is a glimpse of the power of a live, direct GUI environment like Morphic.
If you're intrigued, read John Maloney & Randall Smith's paper Directness and Liveness in the Morphic User Interface Construction Environment
The title does not match your question, so I will answer the question rather than the title.
I have been reading about Morphic for the last two days, and this is my conclusion about what makes Morphic special.
Morphic is perfect for live coding: there is a direct mapping such that when the code is changed, the output on the screen changes, and if the morphs on screen are changed (dragged), the values in the code change. That is cool in an art performance!
But Morphic aims for higher abstractions than that. The properties of the morphs are abstracted away from the code; that separation of concerns can extend to a file or a server-side database.
I suppose Web Storage and a JavaScript file are a good option for storing the live state of the morph properties that are changed interactively. You see, programming is done through each morph, so the code only needs to handle click and drag events.
Research even aims to abstract the code away: coding can be done through the morph itself to define what happens on click or drag. Morphs can be puzzle pieces, as in Scratch.
A program has to be backed up to a file somewhere. I don't consider coding in the cloud safe, so a JS file is the only alternative (unless setting up a server is an option), because data files are not allowed locally, not even in the same folder as the web app: the same-origin policy means same server, not same folder.
When the app starts, the JavaScript file (or Web Storage in the first place) sets up the world of morphs. The user interacts with that world, and the new state can be stored in Web Storage and backed up via a download.
You can use Lively Kernel as the language in the file, or store the morph data in an object, or whatever you find simplest to generate as a downloadable file.
So what is special about this? I won't repeat the accepted answer, but this is my conclusion:
Everything you see on the Morphic screen is a morph.
The tree of morphs is called a world.
The coordinates, dimensions and properties of each morph are abstracted away from the code into the tree.
The research aims to abstract away the code too.