What is the difference between a Morph in Morphic and an NSView in Cocoa?

I'd like to know about the things that make Morphic special.

Morphic is much more than NSView or any other graphics class that simply allows the re-implementation of a limited set of features. Morphic is an extremely malleable UI construction kit. Some design ideas behind Morphic make this intention clear:
A comprehensive hierarchy of 2D coordinate systems is included. They are not restricted to Cartesian or linear. Useful nonlinear coordinate systems include polar, logarithmic, hyperbolic and geographic (map-like) projections.
Separation of the handling of coordinate systems from the morphs themselves. A morph should only need to select its preferred coordinate system, instead of needing to convert every point it draws to World coordinates by itself. Its #drawOn: method and the location of its sub-morphs are expressed in its own coordinate system.
Complete independence from display properties such as size or resolution. There is no concept of a pixel. The GUI is conceived at a higher level, and all of it is independent of pixel resolution. All rendering is anti-aliased.
Separating the coordinate system eases the moving, zooming and rotation of morphs.
All coordinates are Float numbers. This is good for allowing completely arbitrary scales without significant rounding errors.
The Morph hierarchy is not a hierarchy of shapes. Morphs don't have a concept of a border or color. There is no general concept of submorph aligning. A particular morph may implement these in any way that makes sense for itself.
Morphic event handling is flexible and allows you to send events to arbitrary objects. That object need not subclass Morph.
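Morphic itself is written in Smalltalk, so the following is only a rough C++ sketch of the coordinate-system separation described above (a morph selects a coordinate system and draws in local coordinates); every name in it is invented for illustration, none of it is actual Morphic code:

    #include <cmath>
    #include <vector>

    struct Point { double x, y; };                 // Float coordinates, no pixels

    struct CoordinateSystem {
        virtual Point toWorld(Point local) const = 0;
        virtual ~CoordinateSystem() = default;
    };

    struct CartesianSystem : CoordinateSystem {
        Point origin;                              // where this morph sits in its owner
        double scale;
        Point toWorld(Point p) const override {
            return { origin.x + p.x * scale, origin.y + p.y * scale };
        }
    };

    struct PolarSystem : CoordinateSystem {
        Point origin;
        Point toWorld(Point p) const override {    // p.x = radius, p.y = angle in radians
            return { origin.x + p.x * std::cos(p.y), origin.y + p.x * std::sin(p.y) };
        }
    };

    struct Canvas {                                // stand-in for the rendering surface
        void lineTo(Point worldPoint) { /* ... anti-aliased drawing ... */ }
    };

    struct Morph {
        CoordinateSystem* system;                  // the morph only *selects* a system
        std::vector<Morph*> submorphs;             // submorph positions are local too

        virtual void drawOn(Canvas& canvas) {
            // Drawing code thinks purely in local coordinates; the framework,
            // not the morph, applies the projection to world/display space.
            canvas.lineTo(system->toWorld({0.0, 0.0}));
            canvas.lineTo(system->toWorld({1.0, 1.0}));
        }
        virtual ~Morph() = default;
    };

Because the projection lives outside the morph, moving, zooming or rotating a morph only means swapping or adjusting its coordinate system, never rewriting its drawing code.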

Warning: Smalltalk's live dynamic environment is a red pill. Static, frozen languages will never be the same for you ;-)
In a nutshell: Morphic is a virtual world where you can directly explore live objects (just like the real world). Did you ever look at a UI and...
wonder "wow, that's really cool! How did they do that?"
kvetch "I wish they had done X instead!"
While these thoughts would lead to pain and frustration in any other environment, not so in Morphic.
If you want to blow your mind, become a god in a Morphic world:
Launch a Pharo image, and click on the background (which is actually the "World") to bring up the world menu:
Bring up the "halos" on one of the menu options (shift-alt-click on my Mac):
Drag the "Pick Up" halo (top-middle) and drop it somewhere in the world:
Enjoy your menu item which is now available wherever you want it:
Seriously, click it and watch the Browser open!!
Ever have an option you always use that a vendor has buried three menu levels deep? Could this be useful?! This is a glimpse of the power of a live, direct GUI environment like Morphic.
If you're intrigued, read John Maloney & Randall Smith's paper Directness and Liveness in the Morphic User Interface Construction Environment

The title does not match your question, so I will answer the question rather than the title.
I have been reading about Morphic for the last two days, and here is my conclusion about what makes Morphic special.
Morphic is perfect for live coding: there is a direct mapping such that when the code changes, the output on the screen changes, and when morphs on screen are changed (e.g. dragged), the values in the code change. That is great for live art performances!
But Morphic aims for higher abstractions than that. The properties of the morphs are abstracted away from the code; with this separation of concerns the properties can live in a file or in a server-side database.
I suppose Web Storage or a JavaScript file is a good option for storing the live state of morph properties that were changed interactively. You see, programming is done through each morph, so the code only needs to handle click and drag events.
Research aims to abstract the code away as well: coding can be done through the morph itself to define what happens on click or drag. Morphs can be puzzle pieces, as in Scratch.
A program has to be backed up to a file somewhere. I don't consider coding in the cloud safe, so a JS file is the only alternative (unless setting up a server is an option), because data files are not allowed locally, not even in the same folder as the web app. The same-origin policy means same server, not same folder.
When the app starts, the JavaScript file (or Web Storage, if it exists) sets up the world of morphs. The user interacts with that world. The new state can be stored in Web Storage and backed up via a download.
You can use Lively Kernel as the language in the file, or store the morph data in an object, or whatever you find simplest to generate as a downloadable file.
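Just to make the load/interact/save cycle concrete, here is a minimal sketch, written in C++ with a plain local file standing in for Web Storage; every name in it is invented and it is only the shape of the idea, not browser code:

    #include <fstream>
    #include <map>
    #include <string>

    // One "key value" pair per line, e.g. "clock.x 120.5"
    using MorphState = std::map<std::string, std::string>;

    MorphState loadWorld(const std::string& path) {
        MorphState state;
        std::ifstream in(path);
        std::string key, value;
        while (in >> key >> value)
            state[key] = value;
        return state;                        // empty on first run: build the default world
    }

    void saveWorld(const std::string& path, const MorphState& state) {
        std::ofstream out(path);
        for (const auto& [key, value] : state)
            out << key << ' ' << value << '\n';
    }

    int main() {
        MorphState world = loadWorld("world.txt");   // set up the world of morphs
        world["clock.x"] = "120.5";                  // user drags a morph interactively
        saveWorld("world.txt", world);               // persist the new live state
    }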
So what is special about this? I will not repeat the accepted answer, but this is my conclusion:
Everything you see on the Morphic screen is a morph.
The tree of morphs is called a world.
The coordinates, dimensions and properties of each morph are abstracted away from the code into the tree.
Research aims to abstract the code away too.

Related

difference between module and box

I want to know what the difference is between a box and a module in programming. I was asked this question and somehow I am confused now.
While reading on the web about what a box is in programming, I found the link below:
https://www.nbs-system.com/en/blog/black-box-grey-box-white-box-testing-what-differences/
If a box is what that link (and similar links) describe, then is the box the testing of the program, and the module the program itself?
The "box" is just a common word from an object you know from real world. It servers as an analogy with code put together to form some kind of software component. This is because it's internal parts are so related to each other, on responsabilities and communication, on internal data use, on overall goal, that it makes sense to group them together. We name it module and many times this maps to a source file (but doesn't have to), but the same grouping concept applies to classes, packages (usually modules/classes grouped together), libraries or even complete applications or systems.
What the box term mostly refers to is the fact that there is a boundary between what is inside the box and the actors interacting with it from the outside (users or other systems).
The box has to present a public interface to the external world in order to become usable, or useful. This has to be documented, otherwise you don't know how to use it. When you use it based only on this information to achieve your goals, not knowing anything about the internals, we say you are using a "black box".
This has lots of advantages because it can make the box a powerful abstraction that is simple to use, wrapping a possibly complex implementation inside. It encapsulates things and hides them from the user.
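As a tiny illustration (a made-up example, not from any particular library), here is a box whose users only ever see the public interface:

    #include <string>
    #include <vector>

    // A made-up "box": callers see only the public interface; the internals
    // (data structure, algorithm) stay hidden and can change freely.
    class SpellChecker {
    public:
        void addWord(const std::string& word) { dictionary_.push_back(word); }

        bool isCorrect(const std::string& word) const {
            for (const auto& known : dictionary_)    // linear scan today...
                if (known == word) return true;
            return false;
        }

    private:
        // ...a hash set or a trie tomorrow. Black-box users never notice.
        // Code that relies on this detail is grey/white-box usage and
        // breaks when the internals change.
        std::vector<std::string> dictionary_;
    };

Callers depend only on addWord and isCorrect; swapping the vector for a hash set changes nothing for a black-box user.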
If, as an external actor, you use the box in some way because you know how something is done inside, you are violating this encapsulation principle. You are probably entering "grey box" usage mode. This is dangerous because if the box changes, your assumptions (and code) may fail.
When using it as a "white box", you know entirely how it is made inside, so you can make very well-informed decisions in your code, but now the box's code can't really be touched without breaking yours. So there goes the abstraction.
When coding, you mostly want to code against black boxes, and want also to build your own black boxes, for abstraction, cohesion, and modularity reasons.
Grey and white boxes make the most sense when the time comes (hopefully from the start) to test the code you have written. Here you still want to test your system as a black box, but you also want to use white-box testing, because you want both the observable behaviour and the detailed internal behaviour to be correct.
Grey-box testing in particular probably applies when you are testing code you have written that uses other modules or libraries you have not written and generally do not want to test (other people's code was tested already), but you still have some knowledge of their internals, and you write additional tests for your own code that exploit this knowledge.
Edit:
So unless they want you to distinguish the module as the code that's inside, and the box as the wrapping public interface for the module, there's no difference actually.

Making Unreal Engine 4 Objects Like Cubes Swimmable

I'm developing an Unreal Engine 4 survival game. So far I have allocated areas for lakes and placed cubes with water-like textures, and I want to make it so that you can enter a cube and go into a swimming position. I also need the water to change colour with distance, so that two meters away it is dark and up close it is light, and it updates as you move. I don't mind doing one thing at a time. If you can help me I would be extremely grateful!
Set the cube up as a trigger that overlaps pawns. When the cube blueprint receives an "OnActorBeginOverlap" event, cast the overlapping actor to your pawn blueprint, then call a function on it to tell it to go into swim mode.
https://docs.unrealengine.com/latest/INT/Engine/Blueprints/UserGuide/Events/
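If you later move this logic to C++, a rough equivalent might look like the sketch below. The class name and the choice of swim mode are assumptions on my part; only the overlap notifications, the cast, and SetMovementMode are standard engine calls:

    // WaterCube.h -- hypothetical C++ counterpart of the blueprint approach above.
    #pragma once

    #include "CoreMinimal.h"
    #include "GameFramework/Actor.h"
    #include "GameFramework/Character.h"
    #include "GameFramework/CharacterMovementComponent.h"
    #include "WaterCube.generated.h"

    UCLASS()
    class AWaterCube : public AActor
    {
        GENERATED_BODY()

    public:
        // Fired when something overlaps the cube (the cube must be set up to
        // generate overlap events against pawns, as described above).
        virtual void NotifyActorBeginOverlap(AActor* OtherActor) override
        {
            if (ACharacter* Character = Cast<ACharacter>(OtherActor))
            {
                // Tell the character to start swimming.
                Character->GetCharacterMovement()->SetMovementMode(MOVE_Swimming);
            }
        }

        virtual void NotifyActorEndOverlap(AActor* OtherActor) override
        {
            if (ACharacter* Character = Cast<ACharacter>(OtherActor))
            {
                // Back to normal movement when leaving the water volume.
                Character->GetCharacterMovement()->SetMovementMode(MOVE_Walking);
            }
        }
    };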
You first need to add a collision component in the blueprint viewport of the cube, then place events for triggering material effects that simulate the underwater scene, or you can fairly simply create a post-processing volume like the one in this YouTube video: www.youtube.com/watch?v=fLtSfG8f6NM
For a super flamboyant but harder effect, check this one out: www.youtube.com/watch?v=8jbK00s2tKg
The most important part is implementing underwater floating movement. Create another character blueprint that has a floating pawn movement component in it. After that, add an event in the level blueprint to spawn this floating pawn as a runtime object using "SpawnActorByClass" (tip: using C++ is much more convenient for manipulating runtime objects) and possess it (https://docs.unrealengine.com/latest/INT/Gameplay/HowTo/PossessPawns/Blueprints/). You can tweak the floating pawn's settings to get better results.
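In C++ that spawn-and-possess step might look roughly like this; SwitchToSwimPawn and SwimPawnClass are invented names, while SpawnActor and Possess are the engine calls involved:

    #include "CoreMinimal.h"
    #include "Engine/World.h"
    #include "GameFramework/Pawn.h"
    #include "GameFramework/PlayerController.h"
    #include "Templates/SubclassOf.h"

    void SwitchToSwimPawn(APlayerController* PlayerController,
                          TSubclassOf<APawn> SwimPawnClass,
                          const FVector& Location,
                          const FRotator& Rotation)
    {
        UWorld* World = PlayerController->GetWorld();
        if (!World || !SwimPawnClass)
        {
            return;
        }

        // Spawn the floating pawn at runtime (the C++ analogue of "SpawnActorByClass").
        APawn* SwimPawn = World->SpawnActor<APawn>(SwimPawnClass, Location, Rotation);
        if (SwimPawn)
        {
            // Hand control of the newly spawned pawn to the player.
            PlayerController->Possess(SwimPawn);
        }
    }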
Plus: Feel free to ask me any questions. In addition, YouTube is a fantastic place for learning UE4. ExpressVPN works if you can't visit YouTube.

OOP organization with a Map

I have a question about organizing code while also demonstrating fundamental OOP principles. My task is to implement a world (an MxN grid) with robots that receive movement instructions in the form of strings. They are also given an initial starting position and orientation. Instructions are performed to completion, one robot at a time.
I made two classes, Robot and Map, but when I finished my coding I realized that the Map did not really do anything, and when I want to test functions within the Robot class (making sure coordinates are within bounds, etc.) the Map class seems to be more of a hassle than anything. However, I feel like it is important for demonstrating separation of concerns. In this case, is it necessary to have two classes?
I think it does.
Map looks like a collection of Robots. It's like an Eulerian control volume that Robots flux in and out of. It keeps track of the space of acceptable locations in space and time. It maintains rules (e.g. "only one Robot in a square at a time"). It feels analogous to the Board in a Chess or Checkers game.
The problem appears to be that you can't figure out what the meaningful state and behavior of a Map is.
I can see how a Robot would interact with a Map: It would propose a motion, which is a vector with direction and magnitude, and interrogate the Map to see if it ran afoul of any of the rules of motion for a Robot. Those are owned by the Map, not the Robot. Different Maps might allow different rules of motion (e.g. no diagonal moves, one square at a time, etc.)
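A minimal sketch of that interaction, with all names invented, might look like this:

    #include <set>
    #include <utility>

    struct Move { int dx, dy; };

    // The Map owns the bounds and the rules of motion.
    class Map {
    public:
        Map(int width, int height) : width_(width), height_(height) {}

        bool isLegal(int x, int y, Move move) const {
            int nx = x + move.dx, ny = y + move.dy;
            bool inBounds = nx >= 0 && nx < width_ && ny >= 0 && ny < height_;
            bool free = occupied_.count({nx, ny}) == 0;   // one robot per square
            return inBounds && free;
        }

        void place(int x, int y)  { occupied_.insert({x, y}); }
        void vacate(int x, int y) { occupied_.erase({x, y}); }

    private:
        int width_, height_;
        std::set<std::pair<int, int>> occupied_;
    };

    // The Robot knows *what* it wants to do; the Map decides *whether* it may.
    class Robot {
    public:
        Robot(int x, int y, Map& map) : x_(x), y_(y), map_(map) { map_.place(x_, y_); }

        bool tryMove(Move move) {
            if (!map_.isLegal(x_, y_, move)) return false;
            map_.vacate(x_, y_);
            x_ += move.dx; y_ += move.dy;
            map_.place(x_, y_);
            return true;
        }

    private:
        int x_, y_;
        Map& map_;
    };

With this split, the Robot owns its position and intent while the Map owns the bounds and the one-robot-per-square rule, so each class has something meaningful to do and to test.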

I need help choosing a game engine for a very specific task

I need a 3D engine for a very specific task in Artificial Intelligence, and I'd like some input.
The first part is the trivial one - basically, all I need is a FPS engine (3rd person would be good, too), such that it allows me to navigate a room and interact with objects (if you have Java and Windows, I'm looking for something similar to the Give Challenge, but a little more up-to-date). Physics would be nice, but is not a must.
Now, the non-trivial part: I need to impose a virtual grid over this room, such that at any moment I can say "the player is located at B5 - now he moved to B6", and so on. I need to redirect this information to another system (namely, one which will give the player instructions about what to do) and, at the same time, send messages to the player, so I must have a single point through which the game logic passes; also, I'd love not having to write my own collision detection and such.
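To be clear about the grid part, something like the following rough C++ sketch is all I mean (names, cell size, and the notification hook are placeholders I made up); what I'm missing is the engine to host it:

    #include <string>

    // Map a world-space position to a grid label like "B5".
    // Assumes the grid origin is at (0, 0) and coordinates are non-negative.
    std::string gridCell(float x, float z, float cellSize)
    {
        int column = static_cast<int>(x / cellSize);   // 0 -> 'A', 1 -> 'B', ...
        int row    = static_cast<int>(z / cellSize);   // 0 -> '1', 1 -> '2', ...
        return std::string(1, static_cast<char>('A' + column)) + std::to_string(row + 1);
    }

    // Called once per frame from the engine's update hook; when the cell changes,
    // the event would be forwarded to the external instruction-giving system.
    void onPlayerMoved(float x, float z, std::string& lastCell)
    {
        std::string cell = gridCell(x, z, /*cellSize=*/1.0f);
        if (cell != lastCell) {
            lastCell = cell;
            // sendToInstructionSystem("player entered " + cell);   // hypothetical hook
        }
    }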
So far, I've tried:
the Source SDK: it seems a little overkill (since I'm not really planning to shoot anyone, at least half the code base is useless to the task), and since I'm not really a Windows developer, I'm spending too much time with the "easy" stuff (such as getting VS up and running). Plus, cross-platform would be really nice.
Blender game engine: while this worked decently, the interaction model seems a little weird, and some easy stuff (such as making sure the camera stays inside the scene or showing the mouse on screen) gets too weird too soon.
Crystalspace 3D: I've tried their demos, but it looks a little old-fashioned, and since that was one of the problems of previous engines (it's easier to get volunteers when your game looks nice) I'd like to try something else.
Now, maybe I'm asking a little too much of a single piece of software, but I'd love some input. Can anyone suggest an alternative? Or should I give one of the previous ones a second chance?
Try the UDK. All of the things you request are present, and it's free for personal/noncommercial projects. Here are some highlights:
Modern looking. The UDK features an intuitive-ish visual material design system, post-processing effects, Scaleform Gfx UIs from Autodesk, and more.
A visual scripting interface called Kismet that can control gameplay elements, the camera, and more.
UnrealScript, a scripting language similar in syntax to C, C++, Java, that gives you the ability to extend existing functionality or create your own.
Comprehensive documentation available on UDN.
Lots of community support outside of Epic, in places such as Polycount, Eat3d, 3dbuzz, and more.
Basically, "and more".
If what you're looking for is a professional, free (as in beer) engine that will allow you to focus primarily or solely on your differentiating gameplay features, Epic has set the bar high.

How to program an RPG game's scripted event/cutscene system in a tile-based RPG in Objective-C?

For background, I have been working on an RPG based on Ray Wenderlich's tutorials (example: http://www.raywenderlich.com/1163/how-to-make-a-tile-based-game-with-cocos2d).
Now I am trying to build a scripted event/Cut scene system so that for instance when a player enters a building, the different characters can have a discussion of the current events, before continuing the adventure. My only problem is I can't really visualize how one would implement this.
I would guess some sort of one-time-use trigger, maybe kept in a large switch statement on a singleton somewhere, which perhaps draws all the temporary characters? The event would then deactivate itself.
I am just looking for a blueprint of how one would do this. Although programming examples are welcome as well.
It depends a lot on how much time you want to commit to the system and how versatile you want the final system to be. A powerful cut-scene system can be flexible enough to be used in almost every interaction in a typical 2d RPG.
If you want to go all out, I would suggest a heavily data-driven approach. Keep as much data as possible in files and use the filesystem to your advantage. If you say 'all the dialog scenes are in this folder', then adding a new scene just means dropping it into the folder, rather than creating the scene and then touching some master switch statement somewhere. With a large system you want adding a new cutscene to be as simple as possible, not something that requires touching 400 different places.
I would also stay away from switch statements for tracking progress in a cutscene; they add a lot of code overhead per scene. Ideally a cutscene would be as simple as an array of data and a position. Your cutscene manager, the singleton, can walk through the array, decode the data into commands, and fire them off.
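A minimal sketch of that shape, written here in C++ rather than Objective-C purely to show the structure (the command names and fields are invented, and a real system would load them from a file per scene), could look like:

    #include <cstddef>
    #include <cstdio>
    #include <string>
    #include <vector>

    struct CutsceneCommand {
        std::string type;     // "say", "move", "wait", ...
        std::string actor;
        std::string argument;
    };

    class CutsceneManager {
    public:
        void start(std::vector<CutsceneCommand> scene) {
            commands_ = std::move(scene);
            position_ = 0;
        }

        bool isRunning() const { return position_ < commands_.size(); }

        // Called from the game loop; decodes one command and fires it off.
        void step() {
            if (!isRunning()) return;
            const CutsceneCommand& cmd = commands_[position_++];
            if (cmd.type == "say")
                std::printf("%s: %s\n", cmd.actor.c_str(), cmd.argument.c_str());
            else if (cmd.type == "move")
                std::printf("%s walks to %s\n", cmd.actor.c_str(), cmd.argument.c_str());
            // ...more command types as the system grows
        }

    private:
        std::vector<CutsceneCommand> commands_;
        std::size_t position_ = 0;
    };

    int main() {
        CutsceneManager manager;
        manager.start({ {"say",  "Elder", "Welcome, traveler."},
                        {"move", "Hero",  "door"},
                        {"say",  "Hero",  "I must be going."} });
        while (manager.isRunning())
            manager.step();
    }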
Sorry if that's a bit vague, but a lot of these decisions depend on how your engine is structured and what you want out of the system. Keep in mind that the more general the system is, the more uses you may find for it going forward, but it will take longer to get up and running to begin with.
You can just check which tile you are on while you are moving, and when you are on a specific tile you can start a cutscene. You can also add a tag via the Tiled editor (the recommended editor to use with CCTMXTiledMap) to your map to specify where the cutscene should begin, just like the tutorial marks the character's starting point. Then you check for the specified trigger (either a specific tile or the tagged point in the map) every game cycle. From there it's almost easy: you freeze the controls and play a prerecorded camera and object movement until the cutscene finishes, then restore the game to normal mode and turn off checking for the trigger.
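A bare-bones sketch of that per-cycle trigger check (invented names; in the tutorial's code this would sit in the movement/update handler, and the commented calls are hypothetical hooks):

    struct TilePos {
        int x, y;
        bool operator==(const TilePos& other) const { return x == other.x && y == other.y; }
    };

    bool cutsceneTriggered = false;     // turned off once the scene has played

    void onPlayerTileChanged(const TilePos& playerTile, const TilePos& triggerTile)
    {
        if (cutsceneTriggered || !(playerTile == triggerTile))
            return;
        cutsceneTriggered = true;       // one-time trigger
        // freezeControls();            // hypothetical: stop reading player input
        // playCutscene();              // prerecorded camera / object movements
        // restoreControls();           // back to normal mode when it finishes
    }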