Flash movie hangs up on a specific frame - flash-cs4

I have a cube where the sides link to more information on the timeline. The information has a close button that returns to the cube on the timeline. My problem is with the Contact 'side' of the cube. Clicking on Contact goes to the contact information. Clicking the close button on the contact information occasionally causes the movie to hang on the Contact 'side' of the cube. This is the only place where this occurs, and it does not always happen.
Here is a link to the cube: http://www.worldwidego.org/dept404/Cube-test.html
I used Flash CS4 and AS3. The ActionScript is mainly for the buttons; everything else is created on the timeline.
Any thoughts on what may be causing this would be appreciated. I can provide the FLA file, as I know the information here is pretty basic.

The cause of the problem was a motion tween layer that was not visible at the frame that was hanging up. I deleted that motion tween and recreated it on a new layer, and the hang-up was eliminated.

Related

Boxes are too small in properties view

In the properties view, the boxes keep appearing really small, to the point where I can't see what's in them. In the image it happens in front of the delay time, but it is a general problem: whenever there are boxes to write in, this happens. I'm on Ubuntu 20.04 and I've already reinstalled AnyLogic, but this error keeps appearing.
[Image: screenshot of the properties view]

Camera constraints on Verold

There is a problem that multiple users of my model have noticed: when you right-click the model (here), the movements are hypersensitive. Orbit and zoom are fine and steady, but pan now more often than not results in the model rapidly shooting off into the distance. I've been playing with the camera controls to no avail, and I don't want to simply remove the pan option for the client.
Also, is there any way to transition between cameras without a fade, just a movement of the camera?
Also, Verold is not working on Internet Explorer 11... any news?
Thanks
Solved: the problem was the focal point (white-lined sphere). It had accidentally been set far off into the distance (this can easily be done without noticing, and there is no undo). I just brought it back to the object.

Using Kinect skeleton - no interest in WPF drawing

Good day,
I would like to take this opportunity to give my many thanks to the people of stackoverflow.com.
I have been new to coding and .NET over the past year, and I have always found Stack Overflow to be a fantastic base of knowledge for learning. I spent the last couple of weeks working, in depth, on a speech recognition project I am going to use with the upcoming release of Media Browser 3. Originally, the idea was to build a recognizer and have it control media. However, as I moved through the different namespaces for speech recognition, it led me into the realm of the Microsoft Kinect sensor. The more I use the Kinect device, the more I would like to use some of the skeleton tracking it has to offer, which leads me to my question.
I am not interested in building a WPF application that displays a window of what the Kinect is seeing. This is part of a Forms application, in which I would like to support only two or three gestures.
The idea is for it to watch for three gestures and simulate a key press on the keyboard.
So first I enable the skeleton frame before the audio for the recognizer, because I had read on here somewhere that enabling the skeleton after the audio cancels the audio for some reason.
Then I added some event handlers to my form; specifically, I added the SkeletonFrameReady event handler.
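Roughly, the setup looks like this (a minimal sketch assuming the Kinect for Windows SDK v1.x, Microsoft.Kinect namespace, in a Windows Forms app; names like StartKinect are just illustrative):

```csharp
// Minimal sketch, assuming the Kinect for Windows SDK v1.x (Microsoft.Kinect)
// in a Windows Forms app; names like StartKinect are illustrative only.
using System.Linq;
using System.Windows.Forms;
using Microsoft.Kinect;

public partial class MainForm : Form
{
    private KinectSensor sensor;

    private void StartKinect()
    {
        sensor = KinectSensor.KinectSensors
                             .FirstOrDefault(k => k.Status == KinectStatus.Connected);
        if (sensor == null) return;

        // Enable the skeleton stream before starting audio for the recognizer,
        // matching the ordering described above.
        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += OnSkeletonFrameReady;
        sensor.Start();

        System.IO.Stream audioStream = sensor.AudioSource.Start();
        // ... hand audioStream to the SpeechRecognitionEngine here ...
    }

    private void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
        // Gesture checks go in here.
    }
}
```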
I suppose my main questions would be: am I on the right track with skeleton tracking? Is it possible to do this from a Forms application without trying to draw the skeleton?
Thank you again,
I hope I made sense, sorry for my ignorance.
Ben
It is possible, of course. For gesture recognition you can compare the positions of the joints (in the method that the SkeletonFrameReady event calls, which runs several times per second).
If you want to recognize complex gestures (like waving a hand), I suggest you take a look at this page http://blogs.msdn.com/b/mcsuksoldev/archive/2011/08/08/writing-a-gesture-service-with-the-kinect-for-windows-sdk.aspx and download the sample code there (it is hidden in the last paragraph :).
The main idea is checking for predefined gesture segments in the correct order (if segment1 succeeds, look at segment2; if segment2 is paused, look at segment2 again until it either succeeds or fails).
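As a rough illustration, a handler along these lines compares joint positions and fires a simulated key press. This is only a sketch assuming the Kinect for Windows SDK v1.x and a Windows Forms app; the "right hand above the head" test and the Right-arrow key are placeholders for whatever gestures and keys you actually need:

```csharp
// Sketch only: Kinect for Windows SDK v1.x, Windows Forms.
// The "right hand above head" test and the simulated Right-arrow key
// are placeholders for the gestures/keys you actually want.
using System.Windows.Forms;
using Microsoft.Kinect;

public class GestureWatcher
{
    private bool handWasRaised;

    public void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
        Skeleton[] skeletons;
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;
            skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);
        }

        foreach (Skeleton s in skeletons)
        {
            if (s == null || s.TrackingState != SkeletonTrackingState.Tracked) continue;

            // Compare joint positions: is the right hand higher than the head?
            bool handRaised = s.Joints[JointType.HandRight].Position.Y >
                              s.Joints[JointType.Head].Position.Y;

            // Fire once per gesture rather than on every frame.
            if (handRaised && !handWasRaised)
                SendKeys.SendWait("{RIGHT}");

            handWasRaised = handRaised;
        }
    }
}
```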
Hope this helps.

General considerations for NUI/touch interface

For the past few months I've been looking into developing a Kinect based multitouch interface for a variety of software music synthesizers.
The overall strategy I've come up with is to create objects, either programmatically or (if possible) algorithmically, to represent the various controls of the soft synth. These should have:
X position
Y position
Height
Width
MIDI output channel
MIDI data scaler (convert x-y coords to midi values)
Two strategies I've considered for algorithmic creation are an XML description and somehow pulling stuff right off the screen (i.e., given a running program, find the x/y coords of all controls). I have no idea how to go about that second one, which is why I express it in such specific technical language ;). I could do some intermediate solution, like using mouse clicks on the corners of controls to generate an XML file. Another thing I could do, which I've seen frequently in Flash apps, is to put the screen size into a variable and use math to build all interface objects in terms of screen size. Note that it isn't strictly necessary to make the objects the same size as the onscreen controls, or to represent all onscreen objects (some are just indicators, not interactive controls).
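To make the object idea concrete, here is a rough sketch in C# (chosen only because the Kinect SDK targets .NET); the field names, the normalized 0..1 coordinates, and the CC-number field are my own placeholders:

```csharp
// Rough sketch of one interface object with the fields listed above; names,
// the normalized 0..1 coordinates, and the CC number are placeholders.
public class SynthControl
{
    public float X, Y;            // position (normalized screen coords, 0..1)
    public float Width, Height;
    public int MidiChannel;       // MIDI output channel
    public int ControllerNumber;  // which CC this control drives (placeholder)

    // Hit test: does a hand position fall inside this control?
    public bool Contains(float px, float py) =>
        px >= X && px <= X + Width && py >= Y && py <= Y + Height;

    // "MIDI data scaler": map the vertical position within the control to 0..127.
    public int ScaleToMidi(float py)
    {
        float t = (py - Y) / Height;
        if (t < 0f) t = 0f;
        if (t > 1f) t = 1f;
        return (int)(t * 127f);
    }
}
```

Whether these live in normalized coordinates or pixels doesn't matter much, as long as the hand positions get mapped into the same space.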
Other considerations:
Given (for now) two sets of x/y coords as input (left and right hands), what is my best option for using them? My first instinct was to create some kind of focus test, where if the x/y coords fall within an interface object's bounds that object becomes active, and it then becomes inactive if they fall outside some other, smaller bounds for some period of time. The cheap solution I found was to use the left hand as the pointer/selector and the right as a controller, but it seems like I can do more. I have a few gesture solutions (hidden Markov models) I could screw around with. Not that they'd be easy to get to work, exactly, but it's something I could see myself doing given sufficient incentive.
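In code, the focus test might look roughly like this (using the SynthControl sketch above, and simplified to a single set of bounds plus a timeout; the half-second release delay is a number I pulled out of the air):

```csharp
// Sketch of the focus idea: a control grabs focus when a hand enters its
// bounds, and is released only after the hand has stayed outside them for
// ReleaseDelay. The half-second value is an arbitrary placeholder.
using System;

public class FocusTracker
{
    private static readonly TimeSpan ReleaseDelay = TimeSpan.FromSeconds(0.5);

    private SynthControl active;
    private DateTime outsideSince = DateTime.MaxValue;

    // Call once per frame with one hand's normalized screen position.
    // Returns the currently focused control, or null.
    public SynthControl Update(SynthControl[] controls, float px, float py)
    {
        if (active != null)
        {
            if (active.Contains(px, py))
            {
                outsideSince = DateTime.MaxValue;       // still inside, keep focus
            }
            else
            {
                if (outsideSince == DateTime.MaxValue)
                    outsideSince = DateTime.UtcNow;     // just left, start the clock
                else if (DateTime.UtcNow - outsideSince > ReleaseDelay)
                    active = null;                      // been outside long enough
            }
        }

        if (active == null)
        {
            foreach (SynthControl c in controls)
            {
                if (c.Contains(px, py)) { active = c; outsideSince = DateTime.MaxValue; break; }
            }
        }

        return active;
    }
}
```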
So, to summarize, the problem is:
represent the interface (necessary because the default interface always expects mouse input)
select a control
manipulate it using two sets of x/y coords (rotary/continuous controller) or, in the case of switches, preferably use a gesture to switch it without giving/taking focus.
Any comments, especially from people who have worked/are working in multitouch io/NUI, are greatly appreciated. Links to existing projects and/or some good reading material (books, sites, etc) would be a big help.
Whoa, lots of stuff here. I worked on lots of NUI stuff during my time at Microsoft, so let's see what we can do...
But first, I need to get this pet peeve out of the way: you say "Kinect based multitouch". That's just wrong. Kinect inherently has nothing to do with touch (which is why you have the "select a control" challenge). The types of UI consideration needed for touch, body tracking, and mouse are totally different. For example, in touch UI you have to be very careful about resizing things based on screen size/resolution/DPI... regardless of the screen, fingers are always the same physical size and people have the same degree of physical accuracy, so you want your buttons and similar controls to always be roughly the same physical size. Research has found 3/4 of an inch to be the sweet spot for touchscreen buttons. This isn't so much of a concern with Kinect, though, since you aren't directly touching anything - accuracy is dictated not by finger size but by sensor accuracy and the user's ability to precisely control finicky & lagging virtual cursors.
If you spend time playing with Kinect games, it quickly becomes clear that there are 4 interaction paradigms.
1) Pose-based commands. User strikes and holds a pose to invoke some application-wide action or command (usually bringing up a menu)
2) Hover buttons. User moves a virtual cursor over a button and holds still for a certain period of time to select the button
3) Swipe-based navigation and selection. User waves their hand in one direction to scroll a list and in another direction to select from the list
4) Voice commands. User just speaks a command.
There are other mouse-like ideas that have been tried by hobbyists (I haven't seen these in an actual game) but frankly they suck: 1) using one hand for the cursor and another hand to "click" where the cursor is, or 2) using the z-coordinate of the hand to determine whether to "click".
It's not clear to me whether you are asking about how to make some existing mouse widgets work with Kinect. If so, there are some projects on the web that will show you how to control the mouse with Kinect input but that's lame. It may sound super cool but you're really not at all taking advantage of what the device does best.
If I were building a music synthesizer, I would focus on approach #3 - swiping. Something like Dance Central. On the left side of the screen, show a list of your MIDI controllers with some small visual indication of their status. Let the user swipe their left hand to scroll through and select a controller from this list. On the right side of the screen, show how you are tracking the user's right hand within some plane in front of their body. Now you're letting them use both hands at the same time, giving immediate visual feedback of how each hand is being interpreted, and not requiring them to be super precise.
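A bare-bones version of the left-hand swipe detection could look like the following; the half-second window, the 0.35 m travel threshold, and the choice of skeleton-space meters are my assumptions, not anything from a shipped title:

```csharp
// Bare-bones swipe detector for one hand, as suggested above.
// The 0.5 s window and 0.35 m travel threshold are arbitrary placeholders.
using System;
using System.Collections.Generic;

public class SwipeDetector
{
    private struct Sample { public DateTime Time; public float X; }

    private static readonly TimeSpan Window = TimeSpan.FromSeconds(0.5);
    private const float Threshold = 0.35f;               // meters of travel within the window

    private readonly Queue<Sample> samples = new Queue<Sample>();

    // Feed the hand's X position (skeleton space, meters) once per frame.
    // Returns +1 for a swipe right, -1 for a swipe left, 0 otherwise.
    public int Update(float handX)
    {
        DateTime now = DateTime.UtcNow;
        samples.Enqueue(new Sample { Time = now, X = handX });

        // Drop samples older than the window.
        while (now - samples.Peek().Time > Window)
            samples.Dequeue();

        float dx = handX - samples.Peek().X;
        if (Math.Abs(dx) < Threshold) return 0;

        samples.Clear();                                  // don't fire repeatedly for one swipe
        return dx > 0 ? 1 : -1;
    }
}
```

In the skeleton-frame handler you would feed it something like skeleton.Joints[JointType.HandLeft].Position.X each frame and scroll the controller list whenever it returns a non-zero value.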
PS... I'd also like to give a shout out to Josh Blake's upcoming NUI book. It's good stuff. If you really want to master this area, go order a copy :) http://www.manning.com/blake/

Display something on the screen every time an action is made

I have a problem and I'm not sure how to solve it. I am developing a multi-touch game. I can already make everything work fine, except for a small issue: I want to show messages on the playing screen each time the player makes an action. For example, if his finger moves right, the message says "finger moving right" nicely at the bottom of the screen; then if the finger moves left, it says his finger moves left... something like that. Can anyone show me how? I am using Cocos2D; it should be much easier in Cocoa.
Thanks a lot for any help.
You'll probably need to be more specific with your question, but for now, here's a general answer:
Handling touch events on the iPhone and Handling touch ("trackpad") events on the Mac.
You'll receive and process the events per the above, then you'll display the results somehow. For testing, you'll probably just want to log the results to the console. For the final version, you might have a label or even a custom view that draws the "instruction" in some fancier way. If the latter is the case, you'll want to read up on custom views and drawing for whichever platform you're using (or both).