Dragging in processing.js

I am a physics teacher in London and I am trying to learn processing.js
For making teaching resources, a very important technique is being able to drag shapes around. Although I know how to do this in PJS, I have found that the code for handling several draggable objects quickly gets messy (especially if the object is "locked" to the cursor, so that it does not matter if the cursor moves off the object while dragging).
Does anybody know how to run the dragging script from a separate file, i.e. so that the main script calls the dragging script for its objects? The idea is that you would just draw shapes and make them draggable, with all the dragging code kept in a separate file. This would make the creation of teaching resources a lot easier.
It would be great if people could provide some ideas on this. I have seen the drag demos on the main PJS website, but I am looking for something quicker/easier.
Many thanks
Matt Klein
ruby_murray1[AT]hotmail.com

Well, I write processing.js in pure JavaScript without bothering with the Processing syntax, but the approach should be much the same:
Make the objects that you want to drag adhere to a Draggable interface; the interface marks an object as draggable and provides a method to move it.
When a drag starts, check whether there is a Draggable object under the mouse, store it locally, and use the Draggable interface's method to move it around. This way your dragging code is generic to any Draggable object, and objects handle their own movement.
On drag end, remove the Draggable object from your local store (and stop calling its move method).
You could pull this entire dragging logic out into an external file as well, as long as you hook it into the correct mouse events (a rough sketch follows below).
About Interfaces: http://forum.processing.org/topic/class-interface-block-example
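For what it's worth, here is a minimal sketch of that pattern in pure JavaScript. It assumes Processing.js is loaded on the page and that there is a canvas element with id "sketch"; the DraggableCircle object and its method names are made up for illustration. Because the shape being dragged is held in a single variable, it stays "locked" to the mouse even if the cursor briefly leaves the shape.

    function sketchProc(p) {
      // "Draggable interface": anything with contains(x, y), moveTo(x, y) and display().
      function DraggableCircle(x, y, r) {
        this.x = x; this.y = y; this.r = r;
        this.contains = function (mx, my) { return p.dist(mx, my, this.x, this.y) < this.r; };
        this.moveTo = function (mx, my) { this.x = mx; this.y = my; };
        this.display = function () { p.ellipse(this.x, this.y, this.r * 2, this.r * 2); };
      }

      var shapes = [new DraggableCircle(100, 100, 30), new DraggableCircle(250, 150, 40)];
      var dragged = null; // the Draggable currently locked to the mouse, if any

      p.setup = function () { p.size(400, 300); };

      p.draw = function () {
        p.background(240);
        if (dragged !== null) dragged.moveTo(p.mouseX, p.mouseY);
        for (var i = 0; i < shapes.length; i++) shapes[i].display();
      };

      p.mousePressed = function () {
        // Find the topmost Draggable under the mouse and store it locally.
        for (var i = shapes.length - 1; i >= 0; i--) {
          if (shapes[i].contains(p.mouseX, p.mouseY)) { dragged = shapes[i]; break; }
        }
      };

      p.mouseReleased = function () { dragged = null; }; // drag ends
    }

    new Processing(document.getElementById("sketch"), sketchProc);

The generic part (the dragged variable plus the three mouse handlers) can live in its own .js file, so a teaching resource only has to define shapes that provide contains, moveTo and display.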

Related

Design pattern for child calling method in parent

I am currently working on my biggest project and I am having trouble figuring out how to structure my code. I'm looking for some guidance.
I have two objects, a Tile and a Container. Each Tile has a 2D coordinate, and all Tiles are children of the Container. The Container has methods to return the tile at a location, swap tiles, add tiles, and remove tiles.
Now, when you click on a tile it disappears; that was easy because it was self-contained. The problem came when I created different types of tiles that inherit from the base Tile. Each type of tile does a different action when you click on it: some destroy surrounding tiles, some swap with other tiles, and others add new tiles. For simplicity we will call these three subclasses Tile-destroy, Tile-swap, and Tile-add.
My problem is: when I click on these tiles, how can they act on other tiles in the Container? Should I just call functions in the parent class, or is there a better way to do this? I am having trouble #including the Tile in the Container as well as the other way around, and I feel like it's not a proper pattern.
I have it set up so that when a click takes place the Container handles it, checks the type of tile that was clicked, and acts from there with a large else-if statement; however, this makes it very difficult to add new tile types. Ideally, all the information about what happens when you click on a tile would be contained within each tile subclass.
Any ideas?
I can suggest the simplest design:
Your Container becomes a game controller.
Each tile has a Parent property that refers to the Container.
When you click on a tile, it sends a Command to the Container (for example, DestroyTile(x, y) or AddTile(x, y)).
The Container handles these commands and destroys, adds, or swaps tiles.
If you want a really good, more decoupled design, you can also create a handler for each operation type (DestroyTileHandler, AddTileHandler, and so on). The Container then just passes incoming commands to the appropriate handler, along with a context object (such as the Field of tiles). This allows you to add and modify operations without changing the Container code at all (a rough sketch follows at the end of this answer).
See related patterns: Command, Observer
Feel free to ask questions and good luck!
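The question sounds like C++ (given the #include trouble), but the shape of the pattern is the same in any language. Here is a rough JavaScript sketch of the command idea; the class, method, and command names are made up for illustration, not taken from the original post.

    class Container {
      constructor() {
        this.tiles = new Map(); // "x,y" -> Tile
        // One handler per command type; adding an operation means adding an entry here.
        this.handlers = {
          destroy: (cmd) => this.removeTile(cmd.x, cmd.y),
          add:     (cmd) => this.addTile(new Tile(cmd.x, cmd.y)),
        };
      }
      key(x, y)        { return x + "," + y; }
      addTile(tile)    { tile.parent = this; this.tiles.set(this.key(tile.x, tile.y), tile); }
      removeTile(x, y) { this.tiles.delete(this.key(x, y)); }
      handle(command)  { this.handlers[command.type](command); } // no else-if chain
      onClick(x, y)    { const t = this.tiles.get(this.key(x, y)); if (t) t.onClick(); }
    }

    class Tile {
      constructor(x, y) { this.x = x; this.y = y; this.parent = null; }
      // Default behaviour: ask the parent to destroy this tile.
      onClick() { this.parent.handle({ type: "destroy", x: this.x, y: this.y }); }
    }

    // A subclass only changes the command(s) it emits; the Container never changes.
    class AddingTile extends Tile {
      onClick() { this.parent.handle({ type: "add", x: this.x + 1, y: this.y }); }
    }

Each tile subclass keeps the knowledge of what clicking it means, while the Container (or dedicated handlers) keeps the knowledge of how to modify the board, which is exactly the decoupling described above.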

What's the best structure to use when you have many characters that all have the same behavior and animation but different sprite images?

I am making a game for my psychology lab that has different scenes (jungle, sea, desert, moon, dungeon, etc.), but the character behavior in each scene is essentially the same. Is it possible to write a class that has all of the essential behaviors and animations that every sprite will need, and then have subclasses that inherit from this class? (I would only want to change the sprite's image based on the scene in each subclass.)
Sorry for the late answer, but I didn't see this until now. What you are after can fairly easily be achieved in SpriteBuilder, without the need for subclassing. At least, it works well if your characters are composed of a CCNode with animated sprites. Let's say you have a ccb file, called JungleCharacter, set up with all the animations:
Right-click the JungleCharacter file and choose "Duplicate". Now, in the new file (let's say we call it SeaCharacter), select each sprite and, in the 'Item properties' pane on the right-hand side, change the sprite frame. So if you have a sprite frame called "JungleCharacterLeftArm.png", you'd change it to the equivalent "SeaCharacterLeftArm.png".
If it's too tiresome to do this in SpriteBuilder, you could opt to do it in your favorite text editor, since ccb files are XML files. You'll find them in the "Packages/SpriteBuilder Resources.sbpack" folder (right-click and select "Show Package Contents"). If you set up your image assets in a smart way, like "Jungle/LeftArm.png" and "Sea/LeftArm.png", you can then do a quick find-and-replace, replacing "Jungle" with "Sea" (you get the idea).
Hope this helps!

Exclude objects from camera in Three.js

I'm wondering if it's possible to hide a list of objects from a camera (one used to build a reflection map over a plane, simulating water).
So basically I'd like to hide a list of objects from the water reflection.
The Object3D.visible property will of course hide the object from the main camera too, so it's useless on its own.
Any ideas?
Before you update the reflection camera, hide the objects; once the reflection map has been rendered, make them visible again.
Without your current code I can't give exact example code, since there are several ways to accomplish reflection, but the general idea is sketched below.
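A minimal sketch, assuming your setup already has a mirrored reflectionCamera rendering into a THREE.WebGLRenderTarget that the water material samples; all of these names are placeholders for whatever your code actually uses:

    // Call this once per frame, before rendering the scene with the main camera.
    function renderReflection(renderer, scene, reflectionCamera, reflectionTarget, hiddenObjects) {
      // 1. Hide the objects that should not show up in the reflection.
      hiddenObjects.forEach(function (obj) { obj.visible = false; });

      // 2. Render the scene from the reflection camera into the reflection map.
      //    (Older three.js versions use renderer.render(scene, camera, target) instead.)
      renderer.setRenderTarget(reflectionTarget);
      renderer.render(scene, reflectionCamera);
      renderer.setRenderTarget(null);

      // 3. Restore visibility so the main camera still sees the objects.
      hiddenObjects.forEach(function (obj) { obj.visible = true; });
    }

The same hide/render/show idea works whatever reflection technique you use, as long as the visibility toggle happens only around the reflection pass.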

How to make a custom trackball / eyeball control with Cocoa?

I'm writing my first Cocoa app and I would like to make a "trackball / eyeball / arcball / whatever it's called" button to rotate a 3D OpenGL scene.
There's a perfect example of this custom Cocoa control in Pages (Apple iWork suite) when you select a 3D chart. After some hacks, this control seems to be referenced as SFC3DRotateWidget. Here's a screenshot of the control in Pages.
Maybe this widget is reusable, but I didn't find how or where. So I try to recreate it.
I'm inexperienced with Cocoa so I'm not sure how to do that nor exactly where (i.e. what to do with Interface Builder, what to do with code...).
I'm not sure whether I need to override the drawing function. I thought of using a textured button (Interface Builder) with an NSTrackingArea (code) to handle mouse events (move, drag, ...), but the tracking area is necessarily rectangular. The interactive zones of Apple's custom control seem to follow the shape of the arrows. I've read on S.O. that I can use NSBezierPath to create a more specific area (only via code?).
Does that sound right to you?
Am I missing something?
Let me know if you have any tips, tricks or resources you can share!
Thanks!
It sounds like you want to build a custom control. You do this by subclassing NSControl, for which there is a guide. You can control the circular clickable area and the responses to mouse events by implementing the various methods; for example, you can track mouse events with mouseDown: and the related methods.
You probably do not need any custom drawing code: NSImageView subviews with the various arrows will probably suit your purposes fine, unless you'd rather draw them in code.

Creating a Quartz Composer Style interface

I'm wanting to add a Quartz Composer "patch editor" style interface element to my Cocoa/Objective-C(++) application. For those unfamiliar with QC, the patch editor is a visual representation of the patch graph: effectively showing each node and its properties, and providing a mouse-driven select/click/drag interface. It looks like...
Quartz Composer Example http://files.me.com/archgrove/ya1xhh
I'll be using it to render a specific type of multi-rooted tree, where each node has some associated text and an arc joining it to its children. Users will be clicking on the tree nodes to select them, as well as dragging them around.
At the moment, I'm using a custom NSView inside a scroll view that draws each node, the arcs, etc. with Quartz on each render, and processes mouse and keyboard input by hand (including hit testing, movement and so forth). This seems like brutal wheel reinvention, and doesn't interact all that well with Core Animation. I'm hoping someone has some general alternative advice. I'm pondering along the lines of...
An existing control/3rd party library I've overlooked
Make each node in the tree an NSView, and use the normal view structure to handle the input, whilst drawing the graphics in the same way. But then, the inter-node arc rendering doesn't seem to fit naturally into the design
Something using a single NSView still, but making each tree node and arc an individual layer
Something else
Thanks kindly,
adamw
You might want to give EFLaceView a look: FlowChartView on CocoaDev
Edit: the download link on the page above is dead. There is a version of EFLaceView on GitHub.