Vertically scroll a textfield by pixels in ActionScript 2

Is there a way to vertically scroll a textfield in ActionScript 2 by pixels instead of line by line?

I am afraid the answer is simply "no"; there is no API that exposes that functionality. You could make the textfield auto-sizing, so that scrolling is disabled, and then mask it, but then tracking all the user input that would normally cause a text field to scroll will be a hell of a job. Cursor movement is the most notable case: tracking where the cursor is isn't too hard (i.e. it's easy to track the character index), but calculating the resulting pixel coordinates is a hell of a job.
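To make the workaround concrete, here is a minimal sketch (in Python, since the bookkeeping is language-agnostic) of the offset math behind the mask-and-move idea; the function name and the clamping convention are illustrative assumptions, not AS2 API:

```python
# Sketch of pixel scrolling for a masked content layer: the text field is
# auto-sized to its full content height, a mask defines the visible viewport,
# and "scrolling" just moves the field's y position by whole pixels.
# The name `scrolled_y` and the clamping convention are assumptions.

def scrolled_y(viewport_y, viewport_h, content_h, offset_px):
    """y position for the content layer, clamped so it never over-scrolls."""
    max_offset = max(0, content_h - viewport_h)  # furthest we can scroll down
    offset = min(max(offset_px, 0), max_offset)  # clamp to [0, max_offset]
    return viewport_y - offset                   # content moves up as we scroll

# 600px of text in a 200px-high viewport at y=50, scrolled 120px down:
print(scrolled_y(50, 200, 600, 120))  # -> -70
```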
A little side note: I've noticed that you work with AS2 a lot. I'd personally advise you to move on to Haxe or AS3. Haxe for the sake of its richness as a language (and you could still target Flash Player 8 if you need to), and AS3 for the sake of a much better API (there, you have a call to get the coordinates of a character) and the drastically higher execution speed. To me, there is just one advantage to AS2, which is that you can extend the language a lot at runtime. Compiling AS3 in ECMA-compatibility mode allows you to do likewise with AS3; you will lose some of that speed, of course, but it will still be faster than AS2.

Related

How did they make the controls in "Fast Like a Fox"?

I am trying to make a basic rhythm game in Godot, but with unique controls. A few years ago, I played a cool game called Fast Like a Fox. The controls were unique: you tapped on the back of your device to move your character, not on the screen. I thought the controls were cool, and I want to try to replicate them in a simple one-button rhythm game for mobile. Does anyone know if it would be possible for Godot to take that kind of input, either with a built-in function or something else?
They read the accelerometer (and maybe other sensors), which Godot exposes through its accelerometer, gravity and gyroscope inputs. Accelerometers are accurate enough to read passwords as they're being typed, so you can even get a rough estimate of where the user is tapping, which is likely what Fast Like a Fox does: internally they poll the sensor and raise an event when particular changes happen on one or more axes. In your case, it might be enough to treat any sudden change as an event, if all you care about is the user tapping anything.
Try writing an app that displays the delta of each axis measurement, then tap your phone around; you'll figure it out. Remember to test under various conditions (device held upside down, while lying on a bed, while sitting on a chair, while lying on one's side, etc.), since different axes will register the changes.
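To make that delta idea concrete, here is a minimal, framework-agnostic sketch in Python of the threshold test; the sample stream, the threshold value, and the tap interpretation are illustrative assumptions, not Godot API:

```python
# Detect "taps" as sudden per-axis changes between consecutive accelerometer
# samples. The threshold is an assumption; tune it on a real device.

TAP_THRESHOLD = 2.5  # m/s^2 jump between consecutive samples

def detect_taps(samples, threshold=TAP_THRESHOLD):
    """Yield indices of samples whose delta on any axis exceeds threshold."""
    prev = None
    for i, (x, y, z) in enumerate(samples):
        if prev is not None:
            deltas = (abs(x - prev[0]), abs(y - prev[1]), abs(z - prev[2]))
            if max(deltas) > threshold:
                yield i  # sudden change on some axis: treat it as a tap
        prev = (x, y, z)

# Example: a resting device with one spike on the z axis
stream = [(0.0, 0.0, 9.8)] * 5 + [(0.0, 0.0, 13.0)] + [(0.0, 0.0, 9.8)] * 5
print(list(detect_taps(stream)))  # -> [5, 6] (the jump up, then the fall back)
```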

Kivy: Depth Order not so in Depth

Now, I could be wrong about this, but after testing it all day, I have discovered...
When adding a widget and setting the z-index, the value "0" seems to be the magic depth.
If a widget's Z is at 0, it will be drawn on top of everything that's not at 0, Z-wise.
It doesn't matter if a widget has a z-index of 99, -999, 10, -2 or whatever... it will not appear on top of a widget whose z-index is set to 0.
It gets more strange though...
Any index less than -2 or greater than 2 seems to create an "index out of range" error. Funny thing is... when I was working with a background and sprite widget, the background's Z was set to 999 with no errors. When I added another sprite widget, that's when the -2 to 2 z-index limitation appeared.
Yeah I know...sounds whacked!
My question is, am I right about "0" being the magic Z value?
If so, creating a simple 2.5D effect, like making a sprite move behind a big rock, will take some unwanted code.
Since you can only set Z when adding a widget, one must remove a widget and immediately add it back with the new Z value.
You'll have to do this with the moving sprite and the overlapping object in question. Hell, I already have that code practically written, but I want to find out from Kivy pros: is there a way to set the z-index without removing and re-adding a widget?
If not, I'll have to settle for the painful way.
My version of Kivy is 1.9.0
What do you mean by z-order? Drawing order is determined entirely by the order in which widgets are added to the parent, and the index argument to add_widget is just the list index at which the widget will be inserted. The correct way to change drawing order amongst widgets is to remove and re-add them (you can actually mess with the canvases manually, but that is the same thing at a lower level, and not a better idea).
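As a minimal sketch of that remove-and-re-add approach (the helper name is an assumption; `index` is the position in `parent.children`, where index 0 is drawn last, i.e. on top):

```python
from kivy.app import App
from kivy.uix.button import Button
from kivy.uix.floatlayout import FloatLayout

def reinsert(parent, widget, index):
    """Change a widget's draw order by removing it and adding it back."""
    parent.remove_widget(widget)
    parent.add_widget(widget, index=index)

class Demo(App):
    def build(self):
        root = FloatLayout()
        a = Button(text='A', size_hint=(.5, .5), pos_hint={'x': 0, 'y': 0})
        b = Button(text='B', size_hint=(.5, .5), pos_hint={'x': .25, 'y': .25})
        root.add_widget(a)
        root.add_widget(b)    # B now draws on top of A
        reinsert(root, a, 0)  # A moves to children[0] and draws on top
        return root

if __name__ == '__main__':
    Demo().run()
```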
I found a working solution using basic logic, based on the fact that widgets have to be removed and re-added in order to control depth/draw order.
I knew the Main Character widget had to be removed along with the object in question... so I created a Main Character Parent widget, which defines and controls the Main Character, apart from its Graphic widget.
My test involves the Main Character walking in front of a large rock, then behind it... creating a 2.5D effect.
I simply used the "y" theory (ordering by vertical position) along with widget attach and detach code to create the desired effect; see the sketch below.
The only thing that caught me off guard was the fact that my Graphic widget for my Actor was loading textures. That was a big no-no, because the fps died.
Simple fix: I moved the texture loading to the Main Character Parent widget, so the loading is done once for all time.
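A hedged sketch of that y-based ordering, assuming two widgets and a helper called from the movement code (all names are illustrative):

```python
# Whichever widget has the lower y (nearer the viewer) is re-added last,
# so it draws on top; higher y means further away, drawn first.

def update_depth(parent, actor, rock):
    """Re-add both widgets in back-to-front order based on y position."""
    for w in (actor, rock):
        parent.remove_widget(w)
    for w in sorted((actor, rock), key=lambda w: w.y, reverse=True):
        parent.add_widget(w)

# Call whenever the actor's y changes, e.g.:
#     update_depth(game_layer, actor, rock)
```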
PS: if anyone knows how to hide the scrollbars and wishes to share that knowledge, it'll be much appreciated. I haven't looked for an API solution for it yet, but I will soon.
Right now I'm just trying to make sure I can do the basic operations necessary for creating a commercial 2.5D game (handhelds).
I'm a graphic artist and web developer, so coming up with lovely visuals won't be an issue. I'm more concerned with what'll be "under the hood", so to say. Hopefully that's enough, lol.

Create mock 3D "space" with forwards and backwards navigation

In iOS, I'd like to have a series of items in "space", similar to the way Time Machine works. The "space" would be navigated by a scrollbar-like feature on the side of the page. So if the person scrolls up, it would essentially zoom in on the space, and objects that were further away would come closer to the reference point. If one zooms out, those objects would fade into the back, and whatever is behind the frame of reference would come into view. Kind of like this.
I'm open to a variety of solutions. I imagine there's a relatively easy solution within OpenGL; I just don't know where to begin.
Check out Nick Lockwood's iCarousel on GitHub. It's a very good component. The example code he provides uses a custom carousel style very much like what you describe. You should get there with just a few tweaks.
As you said, in OpenGL (ES) it is relatively easy to accomplish what you ask; however, it may not be equally easy to explain it to someone who is not confident with OpenGL :)
First of all, I would suggest you take a look at The Red Book, the reference guide to OpenGL, or at the OpenGL Wiki.
To begin, you could do some practice using GLUT; it will help you gain confidence with OpenGL by providing a high-level API that lets you skip the boring side of setting up an OpenGL context, letting you go directly to the drawing part.
OpenGL ES is a subset of OpenGL, so it essentially has the same structure. Once you've understood how to use OpenGL, it shouldn't be difficult to use OpenGL ES. Of course, Apple's documentation is a very important resource.
Now that you know a lot about OpenGL, you should be able to understand how your program should be structured.
You could, for example, keep your viewpoint fixed and translate the world (or vice versa). There is (of course) no universal solution, especially because the only thing that matters is the final result.
Another solution (maybe equally good, depending on your needs) would be to simply scale images (representing the objects of your world) up and down, to simulate movement toward or away from the object itself.
For example, you could keep all of your images in an array and use a slider to set (increase/decrease) the size of each image. Once an image becomes too large for the display, you would gradually decrease its alpha, so that the image behind it slowly appears. Take a look at the UIImageView reference; it contains all the APIs you need for this.
This will cost you some of the 3-dimensionality, but it's probably a simpler/faster solution than learning OpenGL.
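For what it's worth, here is a hedged Python sketch of how that scale-and-fade mapping might look; the item spacing, fade band, and all names are illustrative assumptions, not UIKit API:

```python
def depth_transform(item_depth, zoom, spacing=1.0, fade_band=0.5):
    """Map an item's depth and the slider's zoom value to (scale, alpha).

    item_depth: how far into the stack the item sits (0 = frontmost).
    zoom:       current slider position, in the same units as item_depth.
    """
    distance = item_depth - zoom  # signed distance from the viewer
    if distance <= 0.0:
        # The viewer has passed this item: grow it and fade it out.
        alpha = max(0.0, 1.0 + distance / fade_band)
        scale = 1.0 - distance
    else:
        alpha = 1.0
        scale = 1.0 / (1.0 + distance * spacing)  # smaller when further away
    return scale, alpha

# Example: three stacked items while the slider sits at depth 0.3
for depth in (0.0, 1.0, 2.0):
    print(depth, depth_transform(depth, zoom=0.3))
```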

General considerations for NUI/touch interface

For the past few months I've been looking into developing a Kinect based multitouch interface for a variety of software music synthesizers.
The overall strategy I've come up with is to create objects, either programmatically or (if possible) algorithmically, to represent the various controls of the soft synth. These should have:
X position
Y position
Height
Width
MIDI output channel
MIDI data scaler (convert x-y coords to midi values)
Two strategies I've considered for algorithmic creation are an XML description, and somehow pulling stuff right off the screen (i.e. given a running program, find the x/y coords of all controls). I have no idea how to go about that second one, which is why I express it in such specific technical language ;). I could do some intermediate solution, like using mouse clicks on the corners of controls to generate an XML file. Another thing I could do, which I've seen frequently in Flash apps, is to put the screen size into a variable and use math to build all interface objects in terms of the screen size (sketched below). Note that it isn't strictly necessary to make the objects the same size as the onscreen controls, or to represent all onscreen objects (some are just indicators, not interactive controls).
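As a hedged sketch of that screen-relative idea (in Python; the control fields, the 0-127 scaling, and all names are assumptions for illustration): controls live in normalized 0..1 coordinates, get scaled to the actual screen, and map a point inside themselves to MIDI values.

```python
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    nx: float   # normalized x position (0..1)
    ny: float   # normalized y position (0..1)
    nw: float   # normalized width
    nh: float   # normalized height
    midi_channel: int

    def bounds(self, screen_w, screen_h):
        """Pixel-space rectangle (x, y, w, h) for the current screen size."""
        return (self.nx * screen_w, self.ny * screen_h,
                self.nw * screen_w, self.nh * screen_h)

    def to_midi(self, px, py, screen_w, screen_h):
        """Scale a point inside the control to a pair of 0-127 MIDI values."""
        x, y, w, h = self.bounds(screen_w, screen_h)
        u = min(max((px - x) / w, 0.0), 1.0)
        v = min(max((py - y) / h, 0.0), 1.0)
        return int(u * 127), int(v * 127)

cutoff = Control('filter_cutoff', nx=0.1, ny=0.2, nw=0.2, nh=0.6, midi_channel=1)
print(cutoff.to_midi(300, 500, screen_w=1280, screen_h=800))  # -> (85, 89)
```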
Other considerations:
Given (for now) two sets of x/y coords as input (left and right hands), what is my best option for using them? My first instinct is/was to create some kind of focus test, where if the x/y coords fall within an interface object's bounds, that object becomes active, and it becomes inactive again if they fall outside some other, smaller bounds for some period of time (see the focus-test sketch below). The cheap solution I found was to use the left hand as the pointer/selector and the right as a controller, but it seems like I can do more. I have a few gesture solutions (hidden Markov models) I could screw around with. Not that they'd be easy to get to work, exactly, but it's something I could see myself doing given sufficient incentive.
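A hedged Python sketch of that focus test, with separate enter/release bounds and a grace period; the rectangles, timings, and names are illustrative assumptions:

```python
import time

def inside(rect, x, y):
    rx, ry, rw, rh = rect  # rect as (x, y, width, height)
    return rx <= x <= rx + rw and ry <= y <= ry + rh

class FocusTracker:
    def __init__(self, activate_rect, release_rect, grace=0.5):
        self.activate_rect = activate_rect  # enter this to gain focus
        self.release_rect = release_rect    # leave this to start losing focus
        self.grace = grace                  # seconds outside before dropping
        self.active = False
        self._left_at = None

    def update(self, x, y, now=None):
        """Feed one hand position; returns whether the control has focus."""
        now = time.monotonic() if now is None else now
        if not self.active:
            if inside(self.activate_rect, x, y):
                self.active, self._left_at = True, None
        elif inside(self.release_rect, x, y):
            self._left_at = None            # still engaged, reset the timer
        elif self._left_at is None:
            self._left_at = now             # just left: start the countdown
        elif now - self._left_at >= self.grace:
            self.active = False             # away long enough, drop focus
        return self.active
```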
So, to summarize, the problem is
represent the interface (necessary because the default interface always expects mouse input)
select a control
manipulate it using two sets of x/y coords (rotary/continuous controller) or, in the case of switches, preferably use a gesture to switch it without giving/taking focus.
Any comments, especially from people who have worked/are working in multitouch io/NUI, are greatly appreciated. Links to existing projects and/or some good reading material (books, sites, etc) would be a big help.
Woah, lots of stuff here. I worked on lots of NUI stuff during my time at Microsoft, so let's see what we can do...
But first, I need to get this pet peeve out of the way: you say "Kinect-based multitouch". That's just wrong. Kinect inherently has nothing to do with touch (which is why you have the "select a control" challenge). The types of UI consideration needed for touch, body tracking, and mouse are totally different. For example, in touch UI you have to be very careful about resizing things based on screen size/resolution/DPI... regardless of the screen, fingers are always the same physical size and people have the same degree of physical accuracy, so you want your buttons and similar controls to always be roughly the same physical size. Research has found 3/4 of an inch to be the sweet spot for touchscreen buttons. This isn't so much of a concern with Kinect, though, since you aren't directly touching anything: accuracy is dictated not by finger size but by sensor accuracy and the user's ability to precisely control finicky and laggy virtual cursors.
If you spend time playing with Kinect games, it quickly becomes clear that there are 4 interaction paradigms.
1) Pose-based commands. The user strikes and holds a pose to invoke some application-wide action or command (usually bringing up a menu)
2) Hover buttons. The user moves a virtual cursor over a button and holds still for a certain period of time to select the button
3) Swipe-based navigation and selection. The user waves their hands in one direction to scroll a list and another direction to select from the list
4) Voice commands. The user just speaks a command.
There are other mouse-like ideas that have been tried by hobbyists (I haven't seen these in an actual game), but frankly they suck: 1) using one hand for the cursor and another hand to "click" where the cursor is, or 2) using the z-coordinate of the hand to determine whether to "click".
It's not clear to me whether you are asking about how to make some existing mouse widgets work with Kinect. If so, there are some projects on the web that will show you how to control the mouse with Kinect input but that's lame. It may sound super cool but you're really not at all taking advantage of what the device does best.
If I were building a music synthesizer, I would focus on approach #3: swiping. Something like Dance Central. On the left side of the screen, show a list of your MIDI controllers with some small visual indication of their status. Let the user swipe their left hand to scroll through and select a controller from this list. On the right side of the screen, show how you are tracking the user's right hand within some plane in front of their body. Now you're letting them use both hands at the same time, giving immediate visual feedback of how each hand is being interpreted, and not requiring them to be super precise.
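A hedged Python sketch of one way that swipe detection could work; the window length, distance threshold, and class name are illustrative assumptions, not anything from the Kinect SDK:

```python
from collections import deque

class SwipeDetector:
    def __init__(self, window=0.4, min_distance=0.25):
        self.window = window              # seconds of history to consider
        self.min_distance = min_distance  # in normalized screen widths
        self.history = deque()            # (timestamp, x) pairs

    def update(self, t, x):
        """Feed one hand sample; return 'left', 'right', or None."""
        self.history.append((t, x))
        while t - self.history[0][0] > self.window:
            self.history.popleft()
        dx = x - self.history[0][1]       # net travel within the window
        if abs(dx) > self.min_distance:
            self.history.clear()          # avoid re-firing on the same motion
            return 'right' if dx > 0 else 'left'
        return None
```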
PS: I'd also like to give a shout-out to Josh Blake's upcoming NUI book. It's good stuff. If you really want to master this area, go order a copy :) http://www.manning.com/blake/

Jerky/jittery (Core) Animation in a screensaver?

I've built a screensaver for Leopard which utilises Core Animation. It doesn't do anything overly complicated: it uses a tree of CALayers and CATextLayers to produce a "table" of data in the following structure:
- root
    - maincontainer
        - subcontainer
            - row [multiple]
                - cell [multiple]
                    - text layer
At most there are 50 CALayers rendered on the screen at any one time.
Once I've built the "table", I animate the "subcontainer" into view using CABasicAnimation. Again, I'm not doing anything fancy: just a simple fade-in.
The problem is that while the animation does happen, it's painful to watch. It's jerky on my development machine, which is a 3.06GHz iMac with 4GB of RAM, and seems to chop the animation into 10 steps rather than showing a gradual change.
It gets worse on the PPC Mac mini the screensaver is targeted for; it refuses to even play the animation, generally "tweening" from the beginning of the animation (0% opacity) straight to halfway (50%), then completing.
I'm relatively new to Objective-C, and my experience is based on using garbage-collected environments, but I can't believe I'm leaking enough memory at the point the screensaver starts to cause such problems.
Also, I'm quite sure it's not a problem with the hardware. I've tested the built-in screensavers that use Core Animation, and downloaded a few free CA-based screensavers for comparison, and they run without issue on both machines.
Information is pretty thin on Google with regard to using CA in screensavers, or using CA in general for that matter, and advice/tutorials on profiling/troubleshooting screensavers seem to be non-existent. So any help the community can provide would be very welcome!
--- UPDATE ---
It seems as though implicit animations help smooth things out a little. Still kinda jerky, but not as bad as trying to animate everything with explicit animations, as in my original approach.
There isn't much special about a screen saver. I assume you've started with the Core Animation Programming Guide? Running it through Instruments will give you a lot of information about where you're taking too much time.
The code you're using to do the fade-in would be useful. For what you're describing, you don't even need CABasicAnimation; you can just set the animatable properties of the layers, and by default they animate. Make sure you've read up on Implicit Animations. The rest of that page is probably of use as well.
Most of your job in Core Animation is getting out of the way. It generally knows what it's doing, and most problems come from second-guessing it or trying to tell it too much.