Is pinch zoom supported in CreateJS? - createjs

Is the pinch zoom touch gesture supported in CreateJS? I cannot find anything in the docs.
Thanks

There is no native support for gestures, but once you enable touch, touch events are translated into mouse events and identified by the pointerID property. Based on this I have been able to implement the pinch zoom gesture in my project (though I have not tested it beyond the latest Android).
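The snippet below assumes that touch has already been enabled on the stage and that stage, map, touch1, touch2 and zoom exist in the surrounding scope. A minimal sketch of that setup (illustrative guesses, not the author's code):

// Assumed setup -- illustration only, variable names guessed from the snippet below.
var stage = new createjs.Stage("canvas");
var map = new createjs.Container();     // the display object that gets panned/zoomed
stage.addChild(map);

var touch1 : createjs.Point = null;     // first finger (or the mouse)
var touch2 : createjs.Point = null;     // second finger
var zoom = 1;                           // current map scale

// singleTouch = false so each finger keeps its own pointerID (0 and 1).
if (createjs.Touch.isSupported()) {
    createjs.Touch.enable(stage, false, false);
}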
This is a snippet from my project:
stage.on("mousedown", function (evt : createjs.MouseEvent) {
    if (evt.pointerID == 0 || evt.pointerID == -1) { // touch 1 or mouse
        touch1 = new createjs.Point(stage.globalToLocal(evt.stageX, 0).x, stage.globalToLocal(0, evt.stageY).y);
    } else if (evt.pointerID == 1) { // touch 2
        touch2 = new createjs.Point(stage.globalToLocal(evt.stageX, 0).x, stage.globalToLocal(0, evt.stageY).y);
    }
});

stage.on("pressup", function (evt : createjs.MouseEvent) {
    if (evt.pointerID == 0 || evt.pointerID == -1) { // touch 1 or mouse
        touch1 = null;
    } else if (evt.pointerID == 1) { // touch 2
        touch2 = null;
    }
});

stage.on("pressmove", function (evt : createjs.MouseEvent) {
    var touch : createjs.Point;
    if (evt.pointerID == -1 || evt.pointerID == 0) {
        touch = touch1;
    } else if (evt.pointerID == 1) {
        touch = touch2;
    }
    var dX = stage.globalToLocal(evt.stageX, 0).x - touch.x;
    var dY = stage.globalToLocal(0, evt.stageY).y - touch.y;
    var oldDist : number;
    if (touch1 && touch2) oldDist = distanceP(touch1, touch2);
    touch.x += dX;
    touch.y += dY;
    // if both fingers are used, zoom and move the canvas
    if (touch1 && touch2) {
        var newDist = distanceP(touch1, touch2);
        var newZoom = zoom * newDist / oldDist;
        zoomMap(newZoom, new createjs.Point((touch1.x + touch2.x) / 2, (touch1.y + touch2.y) / 2));
        // if both fingers are used, apply only half of the motion to each of them
        dX /= 2;
        dY /= 2;
    }
    map.x += dX;
    map.y += dY;
    stage.update();
});

function distanceP(p1 : createjs.Point, p2 : createjs.Point) : number {
    return Math.sqrt((p2.x - p1.x) * (p2.x - p1.x) + (p2.y - p1.y) * (p2.y - p1.y));
}

function zoomMap(newZoom : number, zoomCenter : createjs.Point) {
....
}
NOTE: I am moving and zooming a display object (DO) called map. The stage itself is scaled to compensate for various devicePixelRatio values (retina displays etc.), which is why the globalToLocal calls are used.
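The zoomMap body is elided above. Purely as an illustration (this is not the author's implementation), a zoom-around-a-point version could look like the following, assuming map sits in the same stage-local coordinate space as the touch points and zoom holds its current scale:

// Hypothetical sketch of zoomMap -- scales the map around zoomCenter so the point
// under the fingers stays put. Assumes `map` and `zoom` as described above.
function zoomMap(newZoom : number, zoomCenter : createjs.Point) : void {
    // Map-local point currently under the zoom center.
    var localX = (zoomCenter.x - map.x) / zoom;
    var localY = (zoomCenter.y - map.y) / zoom;
    map.scaleX = map.scaleY = newZoom;
    // Reposition so that the same local point maps back to zoomCenter.
    map.x = zoomCenter.x - localX * newZoom;
    map.y = zoomCenter.y - localY * newZoom;
    zoom = newZoom;
}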

No. EaselJS handles normal mouse events (to determine what is clicked), as well as some drag events, since determining a drag target is a common usage. Additionally, touch events are translated into mouse events (including multi-touch).
Things like swipe, pinch, and other gestures are not handled by the framework at this time.
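For completeness, the drag handling that EaselJS does provide is the pressmove event, which keeps firing on the display object that was originally pressed. A minimal sketch (bitmap and stage are assumed names, not part of the answer above):

// Translate touch input into mouse events so the same code works for fingers and the mouse.
createjs.Touch.enable(stage);
bitmap.on("pressmove", function (evt : createjs.MouseEvent) {
    // Convert the stage coordinates into the object's parent space and follow the pointer.
    var local = bitmap.parent.globalToLocal(evt.stageX, evt.stageY);
    bitmap.x = local.x;
    bitmap.y = local.y;
    stage.update();
});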

Related

Tile based movement on Actionscript 2.0 (adobe flash cs6)

So I have this problem: I do not know how to code tile-based movement in 4 directions (N, S, W, E) for ActionScript 2.0.
I have this code, but it produces dynamic movement, which makes the character move in all 8 directions (NW, NE, SW, SE, N, S, W, E). The goal is to limit it to tile-based movement in only 4 directions (N, S, E, W):
onClipEvent(enterFrame)
{
    speed = 5;
    if (Key.isDown(Key.RIGHT))
    {
        this.gotoAndStop(4);
        this._x += speed;
    }
    if (Key.isDown(Key.LEFT))
    {
        this.gotoAndStop(3);
        this._x -= speed;
    }
    if (Key.isDown(Key.UP))
    {
        this.gotoAndStop(1);
        this._y -= speed;
    }
    if (Key.isDown(Key.DOWN))
    {
        this.gotoAndStop(2);
        this._y += speed;
    }
}
The simplest and most straightforward way is to move the character along the X-axis OR along the Y-axis, only one at a time, never both.
onClipEvent(enterFrame)
{
    speed = 5;
    dx = 0;
    dy = 0;
    // Figure out the complete picture of keyboard input.
    if (Key.isDown(Key.RIGHT))
    {
        dx += speed;
    }
    if (Key.isDown(Key.LEFT))
    {
        dx -= speed;
    }
    if (Key.isDown(Key.UP))
    {
        dy -= speed;
    }
    if (Key.isDown(Key.DOWN))
    {
        dy += speed;
    }
    if (dx != 0)
    {
        // Move along X-axis if LEFT or RIGHT pressed.
        // Ignore if it is none or both of them.
        this._x += dx;
        if (dx > 0)
        {
            this.gotoAndStop(4);
        }
        else
        {
            this.gotoAndStop(3);
        }
    }
    else if (dy != 0)
    {
        // Ignore if X-axis motion is already applied.
        // Move along Y-axis if UP or DOWN pressed.
        // Ignore if it is none or both of them.
        this._y += dy;
        if (dy > 0)
        {
            this.gotoAndStop(2);
        }
        else
        {
            this.gotoAndStop(1);
        }
    }
}

Detecting Drag and Drop and detecting long press (XNA or as3)

I am trying to implement drag-and-drop behaviour for an object: I need to register when the touch is pressed, and after it has moved a bit it should start dragging.
Unfortunately I am using a custom framework implemented in C# that is something like XNA.
How would you do it in XNA?
(It's for an Android tablet.)
Obviously this example should be refactored to use input services and better handling for the touch up/down concepts, but the core of the answer remains the same:
// Uses the TouchPanel API from Microsoft.Xna.Framework.Input.Touch
// (TouchCollection / TouchLocationState).
public override void Update(GameTime time)
{
    TouchCollection currentState = TouchPanel.GetState();
    TouchCollection oldTouches = this.oldState;

    if (oldTouches.Count == 0 && currentState.Count == 1 && currentState[0].State == TouchLocationState.Pressed)
    {
        // Went from Released to Pressed this frame
        this.touchPressed = true;
    }
    else
    {
        // Not the frame it went to Pressed
        this.touchPressed = false;
    }

    if (oldTouches.Count == 1 && currentState.Count == 1 && currentState[0].State == TouchLocationState.Moved)
    {
        // Touch is down, and has been for more than 1 frame (i.e. the user is dragging)
        this.touchDown = true;
    }
    else
    {
        this.touchDown = false;
    }

    if (oldTouches.Count == 1 && currentState.Count == 0)
    {
        // Went from Pressed to Released this frame
        this.touchReleased = true;
    }
    else
    {
        // Not the frame it went to Released
        this.touchReleased = false;
    }

    this.oldState = currentState;
}
You might have to fiddle with the 'ifs' a bit, since I don't actually remember whether the touch collection goes down to 0 the first frame after you release your finger, or whether it still contains one TouchLocation with the state 'Released'. Test it, and you shall have your answer!
Now you can use 'this.touchDown' elsewhere in your code to determine if your object should move or not.

How to find the Joint coordinates(X,Y,Z) ,also how to draw a locus of the tracked joint?

I am trying to develop logic to recognize a circle drawn by the user's right hand. I got the code to draw and track the skeleton from the sample code:
private void SensorSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    Skeleton[] skeletons = new Skeleton[0];
    using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
    {
        if (skeletonFrame != null)
        {
            skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];
            skeletonFrame.CopySkeletonDataTo(skeletons);
        }
    }
    using (DrawingContext dc = this.drawingGroup.Open())
    {
        // Draw a transparent background to set the render size
        dc.DrawRectangle(Brushes.Black, null, new Rect(0.0, 0.0, RenderWidth, RenderHeight));
        if (skeletons.Length != 0)
        {
            foreach (Skeleton skel in skeletons)
            {
                RenderClippedEdges(skel, dc);
                if (skel.TrackingState == SkeletonTrackingState.Tracked)
                {
                    this.DrawBonesAndJoints(skel, dc);
                }
                else if (skel.TrackingState == SkeletonTrackingState.PositionOnly)
                {
                    dc.DrawEllipse(
                        this.centerPointBrush,
                        null,
                        this.SkeletonPointToScreen(skel.Position),
                        BodyCenterThickness,
                        BodyCenterThickness);
                }
            }
        }
        // prevent drawing outside of our render area
        this.drawingGroup.ClipGeometry = new RectangleGeometry(new Rect(0.0, 0.0, RenderWidth, RenderHeight));
    }
}
What I want to do now is track the coordinates of the user's right hand for gesture recognition.
Here is how I am planning to get the job done:
Start the gesture.
Draw the circle gesture; store the coordinates of the starting point, then record the coordinates at every 45-degree shift of the joint from the start, giving 8 samples for the 8 octants.
To decide whether a circle was drawn, check the relationship between the eight samples.
Also, in the depth image I want to show the locus of the drawn gesture, so that as the hand point moves it leaves a trace behind and at the end we get the figure that was drawn by the user. I have no idea how to achieve this.
Coordinates for each joint are available for each tracked skeleton during each SkeletonFrameReady event. Inside your foreach loop...
foreach (Skeleton skeleton in skeletons) {
    // get the joint
    Joint rightHand = skeleton.Joints[JointType.HandRight];
    // get the individual points of the right hand
    double rightX = rightHand.Position.X;
    double rightY = rightHand.Position.Y;
    double rightZ = rightHand.Position.Z;
}
You can look at the JointType enum to pull out any of the joints and work with the individual coordinates.
To draw your gesture trail you can use the DrawContext you have in your example or use another way to draw a Path onto the visual layer. With your x/y/z values, you would need to scale them to the window coordinates. The "Coding4Fun" library offers a pre-built function to do it; alternatively you can write your own, for example:
private static double ScaleY(Joint joint)
{
    double y = ((SystemParameters.PrimaryScreenHeight / 0.4) * -joint.Position.Y) + (SystemParameters.PrimaryScreenHeight / 2);
    return y;
}

private static void ScaleXY(Joint shoulderCenter, bool rightHand, Joint joint, out int scaledX, out int scaledY)
{
    double screenWidth = SystemParameters.PrimaryScreenWidth;
    double x = 0;
    double y = ScaleY(joint);
    // if rightHand then place shoulderCenter on the left of the screen,
    // else place shoulderCenter on the right of the screen
    if (rightHand)
    {
        x = (joint.Position.X - shoulderCenter.Position.X) * screenWidth * 2;
    }
    else
    {
        x = screenWidth - ((shoulderCenter.Position.X - joint.Position.X) * (screenWidth * 2));
    }
    if (x < 0)
    {
        x = 0;
    }
    else if (x > screenWidth - 5)
    {
        x = screenWidth - 5;
    }
    if (y < 0)
    {
        y = 0;
    }
    scaledX = (int)x;
    scaledY = (int)y;
}

How to scroll axis scale outside the graph of Zedgraph using the mouse event

Is it possible to scale the axes outside the graph using a mouse-down-and-hold event, moving up or down for the Y-axis and left or right for the X-axis? For example, when I trigger MouseDownEvent and hold on the X-axis at the 0.6 tick (or in the space along that scale) and move to the right, the scale should scroll depending on the chart fraction. Could you post an example? Thanks in advance!
Separately panning and zooming Y axes can be achieved using the mouse events of ZedGraph: the MouseDownEvent, MouseMoveEvent, MouseUpEvent and MouseWheel events (credit goes to a colleague of mine).
It works with multiple GraphPanes and multiple Y axes.
The MouseMoveEvent is used to shift the Min and the Max of a Y axis when the mouse is moved while its button is pressed. Otherwise, it is used to get a reference to the Y axis object the mouse is hovering over.
The MouseDownEvent is used to initiate an axis pan operation.
The MouseWheel is used to perform a zoom on a Y axis.
And the MouseUpEvent is used to clean things up when the zooming and panning operations are finished.
Here is the code:
// The axis that is currently hovered by the mouse
YAxis hoveredYAxis;
// The graph pane that contains the axis
GraphPane foundPane;
// The scale of the axis before it is panned
double movedYAxisMin;
double movedYAxisMax;
// The Y on the axis when the panning operation is starting
float movedYAxisStartY;

void z_MouseWheel(object sender, MouseEventArgs e)
{
    if (hoveredYAxis != null)
    {
        var direction = e.Delta < 1 ? -.05f : .05f;
        var increment = direction * (hoveredYAxis.Scale.Max - hoveredYAxis.Scale.Min);
        var newMin = hoveredYAxis.Scale.Min + increment;
        var newMax = hoveredYAxis.Scale.Max - increment;
        hoveredYAxis.Scale.Min = newMin;
        hoveredYAxis.Scale.Max = newMax;
        foundPane.AxisChange();
        z.Invalidate();
    }
}

bool z_MouseUpEvent(ZedGraphControl sender, MouseEventArgs e)
{
    hoveredYAxis = null;
    return false;
}

bool z_MouseMoveEvent(ZedGraphControl sender, MouseEventArgs e)
{
    var pt = e.Location;
    if (e.Button == System.Windows.Forms.MouseButtons.Left)
    {
        if (hoveredYAxis != null)
        {
            var yOffset = hoveredYAxis.Scale.ReverseTransform(pt.Y) - hoveredYAxis.Scale.ReverseTransform(movedYAxisStartY);
            hoveredYAxis.Scale.Min = movedYAxisMin - yOffset;
            hoveredYAxis.Scale.Max = movedYAxisMax - yOffset;
            sender.Invalidate();
            return true;
        }
    }
    else
    {
        var foundObject = findZedGraphObject(null);
        hoveredYAxis = foundObject as YAxis;
        if (hoveredYAxis != null)
        {
            z.Cursor = Cursors.SizeNS;
            return true;
        }
        else
        {
            if (z.IsShowPointValues)
            {
                z.Cursor = Cursors.Cross;
                return false;
            }
            else
            {
                z.Cursor = Cursors.Default;
                return true;
            }
        }
    }
    return false;
}

bool z_MouseDownEvent(ZedGraphControl sender, MouseEventArgs e)
{
    if (e.Button == System.Windows.Forms.MouseButtons.Left)
    {
        if (hoveredYAxis != null)
        {
            movedYAxisStartY = e.Location.Y;
            movedYAxisMin = hoveredYAxis.Scale.Min;
            movedYAxisMax = hoveredYAxis.Scale.Max;
            return true;
        }
    }
    return false;
}
This is a helper that factors out ZedGraph's object-finding operations a bit:
object findZedGraphObject(GraphPane pane = null)
{
    var pt = zgc.PointToClient(Control.MousePosition);
    if (pane == null)
    {
        foundPane = zgc.MasterPane.FindPane(pt);
        if (foundPane != null)
        {
            object foundObject;
            int forget;
            using (var g = zgc.CreateGraphics())
                if (foundPane.FindNearestObject(pt, g, out foundObject, out forget))
                    return foundObject;
        }
    }
    return null;
}
If I understand your question correctly, here is my response: ZedGraph has a built-in "Pan" function that lets you change the scale of the X and Y axes.
Place the cursor within the chart area.
Hold the Ctrl key and move the mouse in the X or Y direction to change the scale.
You can get back to the original state with "Un-Pan" (context menu).
Cheers :)
Do you want to create a scroll bar?
zedGraphControl1.IsShowHScrollbar = true;
//Set borders for the scale
zedGraphControl1.GraphPane.XAxis.Scale.Max = Xmax;
zedGraphControl1.GraphPane.XAxis.Scale.Min = Xmin;

MonoDevelop.Components.Docking - Tabbed DockGroupType issue

Our Windows application uses the MonoDevelop.Components.Docking framework. We last updated to the latest version in November 2010. I have come across some interesting behavior that occurs in the following situation:
Press the auto-hide button of the first panel in a DockGroupType.Tabbed ParentGroup.
Hold the mouse over the collapsed panel until it expands.
Drag the panel into the center of the tabbed group (back to its original spot) and drop it.
At this point the panel resizes to the size of the blue rectangle that showed where the panel would be dropped, and then undocks from the main window to float at that size. This only happens with the first item in a tabbed group. I found a commented-out section of code in DockGroupItem.cs (line 112, GetDockTarget(..)) that seems as though it might deal with this. However, it references a DockPosition value that is not defined, CenterAfter. The method is below, with the commented-out portion included:
public bool GetDockTarget (DockItem item, int px, int py, Gdk.Rectangle rect, out DockDelegate dockDelegate, out Gdk.Rectangle outrect)
{
    dockDelegate = null;
    if (item != this.item && this.item.Visible && rect.Contains (px, py)) {
        int xdockMargin = (int) ((double)rect.Width * (1.0 - DockFrame.ItemDockCenterArea)) / 2;
        int ydockMargin = (int) ((double)rect.Height * (1.0 - DockFrame.ItemDockCenterArea)) / 2;
        DockPosition pos;
/*      if (ParentGroup.Type == DockGroupType.Tabbed) {
            rect = new Gdk.Rectangle (rect.X + xdockMargin, rect.Y + ydockMargin, rect.Width - xdockMargin*2, rect.Height - ydockMargin*2);
            pos = DockPosition.CenterAfter;
        }
*/      if (px <= rect.X + xdockMargin && ParentGroup.Type != DockGroupType.Horizontal) {
            outrect = new Gdk.Rectangle (rect.X, rect.Y, xdockMargin, rect.Height);
            pos = DockPosition.Left;
        }
        else if (px >= rect.Right - xdockMargin && ParentGroup.Type != DockGroupType.Horizontal) {
            outrect = new Gdk.Rectangle (rect.Right - xdockMargin, rect.Y, xdockMargin, rect.Height);
            pos = DockPosition.Right;
        }
        else if (py <= rect.Y + ydockMargin && ParentGroup.Type != DockGroupType.Vertical) {
            outrect = new Gdk.Rectangle (rect.X, rect.Y, rect.Width, ydockMargin);
            pos = DockPosition.Top;
        }
        else if (py >= rect.Bottom - ydockMargin && ParentGroup.Type != DockGroupType.Vertical) {
            outrect = new Gdk.Rectangle (rect.X, rect.Bottom - ydockMargin, rect.Width, ydockMargin);
            pos = DockPosition.Bottom;
        }
        else {
            outrect = new Gdk.Rectangle (rect.X + xdockMargin, rect.Y + ydockMargin, rect.Width - xdockMargin*2, rect.Height - ydockMargin*2);
            pos = DockPosition.Center;
        }
        dockDelegate = delegate (DockItem dit) {
            DockGroupItem it = ParentGroup.AddObject (dit, pos, Id);
            it.SetVisible (true);
            ParentGroup.FocusItem (it);
        };
        return true;
    }
    outrect = Gdk.Rectangle.Zero;
    return false;
}
I have tried a few small things, but nothing has affected the behavior so far. Any ideas on what I could edit to get this working properly?
Thanks!
To fix the problem above, I added a check to see whether the item being docked is the same as the first item in the tab group; if so, the insertion index is adjusted, because trying to insert the item before itself in the group causes the float problem. Since the item's status was "AutoHide", it is still technically visible, so it was kept in the tab group's list of visible objects. The changes are below.
DockGroup.cs (line 122) - commented out the index increase:
public DockGroupItem AddObject (DockItem obj, DockPosition pos, string relItemId)
{
    ...
    else if (pos == DockPosition.CenterBefore || pos == DockPosition.Center) {
        if (type != DockGroupType.Tabbed)
            gitem = Split (DockGroupType.Tabbed, pos == DockPosition.CenterBefore, obj, npos);
        else {
            //if (pos == DockPosition.Center) // removed to fix issue with drag/docking the 1st tab item after autohiding
            //    npos++;
            gitem = new DockGroupItem (Frame, obj);
            dockObjects.Insert (npos, gitem);
            gitem.ParentGroup = this;
        }
    }
    ResetVisibleGroups ();
    return gitem;
}
DockGroup.cs (line 912) - added check for same item
internal override bool GetDockTarget (DockItem item, int px, int py, out DockDelegate dockDelegate, out Gdk.Rectangle rect)
{
    if (!Allocation.Contains (px, py) || VisibleObjects.Count == 0) {
        dockDelegate = null;
        rect = Gdk.Rectangle.Zero;
        return false;
    }
    if (type == DockGroupType.Tabbed) {
        // this is a fix for the issue with drag/docking the 1st tab item after autohiding it
        int pos = 0;
        if (item.Id == ((DockGroupItem)VisibleObjects[0]).Id)
        {
            pos++;
        }
        // Tabs can only contain DockGroupItems
        return ((DockGroupItem)VisibleObjects[pos]).GetDockTarget (item, px, py, Allocation, out dockDelegate, out rect);
    }
    ...