What scripting language do I use for HTML5 Canvas in Animate CC?

In Animate CC, under Create New, I selected HTML5 Canvas. However, when I need to add some scripts to the project, I don't know which language to use. I tried both ActionScript and JavaScript, but neither of them worked for me, and since the software is new, I couldn't find much online help.
For example, using JavaScript (I put all the scripts on one layer):
Frame 1, I put:
var count = 0;
alert(count);
Frame 2, I put:
count = count + 1;
alert(count);
Frame 20, I put:
this.gotoAndPlay(1); // go to frame 2 and play again
The first "alert (count)" on frame 1 worked, but "alert(count)" on frame 2 didn't kick in.
Thank you for your help.

I found the answer. Both JavaScript and ActionScript are supported in Animate CC. However, ActionScript is supported in ActionScript-based projects and JavaScript is supported in HTML5-based projects.
The missing part in my script was "this". I need to use "this.count" instead of just "count". By default, a variable declared in frame code is scoped to that frame only, so to make it accessible from other frames I need to attach it to the timeline with "this.variableName".
Frame 1: this.count = 0;
Frame 2: this.count++;
alert(this.count);
Frame 20: this.gotoAndPlay(1);
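For reference, here is a minimal sketch of the three frame scripts with that fix applied, assuming they all sit on one actions layer (in an HTML5 Canvas document, this inside frame code refers to the timeline's MovieClip, and frame numbers in code are zero-based):
// Frame 1
this.count = 0;              // stored on the timeline so later frames can see it
alert(this.count);
// Frame 2
this.count = this.count + 1;
alert(this.count);
// Frame 20
this.gotoAndPlay(1);         // zero-based, so this jumps back to frame 2 and plays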

Related

I have a question about a YouTube tutorial because I want to customize it a little

In this video, https://youtu.be/klBvssJE5Qg, it shows how to spawn enemies outside of a fixed camera (this is in GDScript, by the way). How could I make this work with a moving camera? I want to make a zombie fighting game with a moving camera and zombies spawning outside of it.
I would really appreciate help with this.
I've tried researching how to do it on the internet, but I just didn't find anything.
After looking at the video, I see they are using this line to spawn:
Global.instance_node(enemy_1, enemy_position, self)
This suggests a couple of things to me:
The position is probably either relative to the self passed as an argument, or global.
There must be an Autoload called Global, which I need to check to be sure.
And the answer is in another castle video.
In the video Godot Wave Shooter Tutorial #2 - Player Shooting we find this code:
extends Node

func instance_node(node, location, parent):
    var node_instance = node.instance()
    parent.add_child(node_instance)
    node_instance.global_position = location
    return node_instance
So we are working with global coordinates (global_position), which means enemy_position is used as global coordinates.
OK, instead of using enemy_position as global coordinates, we are going to use it as local coordinates of the Camera2D (or of a child of it). That means you need a reference to the Camera2D (I don't know where you keep it in your scene).
You could put your code in a child of the Camera2D, or copy the transform of the Camera2D using a RemoteTransform2D. Either way, you could then work in its local coordinates. Thus you would do this:
Global.instance_node(enemy_1, to_global(enemy_position), self)
Or you could get a reference by exporting a NodePath (or, in the newest Godot, by exporting a Camera2D directly) from your script and setting it via the inspector. Then you can do this:
Global.instance_node(enemy_1, camera.to_global(enemy_position), self)
Where camera is your reference to the Camera2D.
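For example, a rough sketch of the exported-NodePath approach (Godot 3.x; camera_path and spawn_enemy are names I made up for illustration, and Global.instance_node is the Autoload helper from the tutorial):
# Assign your Camera2D to camera_path in the Inspector.
export(NodePath) var camera_path
onready var camera = get_node(camera_path) as Camera2D

func spawn_enemy(enemy_scene, enemy_position):
    # enemy_position is local to the camera; convert it to the global
    # coordinates that Global.instance_node() applies to the new node.
    Global.instance_node(enemy_scene, camera.to_global(enemy_position), self)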
In the following section of Arena.gd:
func _on_Enemy_spawn_timer_timeout():
    var enemy_position = Vector2(rand_range(-160, 670), rand_range(-90, 390))
I believe you can add the X and Y coordinates of the camera to their corresponding random ranges in the enemy position Vector2. This will displace the enemy depending on where the camera is currently located.
You can get the position of the camera with this:
get_parent().get_node("Name of your camera").position
When this is all put together:
func _on_Enemy_spawn_timer_timeout():
    var enemy_position = Vector2(
        rand_range(-160, 670) + get_parent().get_node("Name of your camera").position.x,
        rand_range(-90, 390) + get_parent().get_node("Name of your camera").position.y)
Keep in mind that you might need to displace the values in the following while loop as well. I hope this helps.
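If you would rather not look the camera up twice, a possible variant of the same idea (untested; "Camera2D" stands in for your camera's actual node name):
func _on_Enemy_spawn_timer_timeout():
    # fetch the camera position once per spawn
    var camera_position = get_parent().get_node("Camera2D").position
    var enemy_position = Vector2(
        rand_range(-160, 670) + camera_position.x,
        rand_range(-90, 390) + camera_position.y)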

cannot access width/height properties of PImage object in setup()

I'm working with the PImage class. Normally I make two PImage objects, load an image into one of them (my input picture) and create a blank image using createImage(), which will become the output. I then use the loadPixels() method to access the data of the input, do some manipulation, and then set the respective output pixel to the result. I have not had any trouble with this so far.
The dimensions of the input and output PImage objects need to be the same to make the pixel-by-pixel manipulations as straightforward as possible.
So here is the pickle:
PImage myinput;
PImage myoutput;
void setup() {
  size(350, 350);
  myinput = loadImage("myfile.jpg");
  // the pic is 300 x 300
  //myoutput = createImage(myinput.width, myinput.height, RGB);
  // I've hardcoded the width and height below
  myoutput = createImage(300, 300, RGB);
}

void draw() {
  image(myoutput, 0, 0);
}
The result of the above is a black square 300 x 300 which overlaps a grey canvas of 350 x 350. Given the code I've written, this is the result I would expect.
Now, in the above example, I've hardcoded the width and height of 'myoutput' with the line:
myoutput = createImage(300, 300, RGB);
My question relates to the bit that follows:
Instead of hardcoding the values, I would rather do something like this:
myoutput = createImage(myinput.width, myinput.height, RGB);
But it isn't working; I just get a big 350 x 350 grey box, and I'm not sure why, though I do have my suspicions. When I work with pictures in JavaScript, I have to wait for the page to load (using an event listener like window.onload() etc.) before I can access the width/height properties of the image.
UPDATE:
I saw another post which had the following:
/* @pjs preload="myfile.jpg"; */
So I just included this before I declared my PImage objects and now the following line works.
myoutput = createImage(myinput.width, myinput.height, RGB);
I'm quite confused by the new piece of code.
When you run your sketch in Java mode, you're running as Java. Java loads images synchronously, which means that the code won't continue running until the image is fully loaded. That's why it works in Java mode.
But when you're running using Processing.js, you're running as JavaScript. JavaScript loads images asynchronously, which means that the image is loaded in the background while your code continues. That means you aren't guaranteed that the image is done loading when the next line executes, which is why the image's width and height are unset.
The preload command tells Processing.js to load the images before the sketch starts executing, so that you're guaranteed that the image loads before you try to access its width and height.
From the Processing.js reference:
This directive regulates image preloading, which is required when using loadImage() or requestImage() in a sketch. Using this directive will preload all images indicated between quotes, and comma separated if multiple images are used, so that they will be ready for use when the sketch begins running. As resources are loaded via the AJAX approach, not using this directive will result in the sketch loading an image, and then immediately trying to use this image in some way, even though the browser has not finished downloading and caching it.
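Putting the directive together with the sketch from the question (assuming the file is still named myfile.jpg), something like this should work in both Java mode and Processing.js:
/* @pjs preload="myfile.jpg"; */

PImage myinput;
PImage myoutput;

void setup() {
  size(350, 350);
  // the preload directive guarantees the image is downloaded before
  // setup() runs, so width and height are valid here
  myinput = loadImage("myfile.jpg");
  myoutput = createImage(myinput.width, myinput.height, RGB);
}

void draw() {
  image(myoutput, 0, 0);
}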

In Gimp script-fu, how can you access QuickMask functionality?

In the Gimp GUI, the QuickMask is very useful for many things, but this functionality doesn't seem to be directly available through script-fu. No obvious equivalents were apparent to me in the procedure browser.
In particular, putting the (value/gray) pixels of a layer into the selection mask is the basic thing I need to do. I tried using gimp-image-get-selection to get the selection channel's id number, then gimp-edit-paste into it, but the following anchor operation caused Gimp to crash.
My other answer contains the "theoretical" way of doing it; however, the O.P. found a bug in GIMP, as of version 2.6.5, as can be seen in the comments to that answer.
I have a workaround for what the O.P. intends to do: paste the contents of a given image layer into the image selection. As noted, edit-copy -> edit-paste on the selection drawable triggers a program crash.
The workaround is to create a new image channel with the desired contents, through the copy and paste method, and then use gimp-selection-load to make the selection equal the channel contents:
The functions that need to be called are as follows (I won't paste Scheme code, as I am not proficient with all the parentheses; I did the tests using the Python console in GIMP):
>>> img = gimp.image_list()[0]
>>> ch = pdb.gimp_channel_new(img, img.width, img.height, "bla", 0, (0,0,0))
>>> ch
<gimp.Channel 'bla'>
>>> pdb.gimp_edit_copy(img.layers[0])
1
>>> pdb.gimp_image_add_channel(img, ch, 0)
>>> fl = pdb.gimp_edit_paste(ch, 0)
>>> fl
<gimp.Layer 'Pasted Layer'>
>>> pdb.gimp_floating_sel_anchor(fl)
>>> pdb.gimp_selection_load(ch)
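A rough Script-Fu translation of the same calls might look like this (untested; the PDB procedures are the same ones used above, and "workaround mask" is just an arbitrary channel name):
(let* ((image (vector-ref (cadr (gimp-image-list)) 0))
       (layer (car (gimp-image-get-active-layer image)))
       (width (car (gimp-image-width image)))
       (height (car (gimp-image-height image)))
       (channel (car (gimp-channel-new image width height
                                       "workaround mask" 100 '(0 0 0)))))
  (gimp-image-add-channel image channel 0)
  ; copy the layer and anchor the paste inside the new channel
  (gimp-edit-copy layer)
  (gimp-floating-sel-anchor (car (gimp-edit-paste channel FALSE)))
  ; make the selection equal to the channel's contents
  (gimp-selection-load channel))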
Using QuickMask through the user interface is exactly equivalent to drawing on the Selection, treating the selection as a drawable object.
So, to get the equivalent of QuickMask in script-fu, all one needs to do is retrieve the Selection as a drawable and pass that as a parameter to the calls that will modify it.
And to get the selection, one just has to call 'gimp-image-get-selection'.
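As a sketch of that idea (untested, and the crash the O.P. found shows that not every operation on this drawable behaves), you would retrieve the selection drawable and hand it to an ordinary drawing call, for example gimp-pencil:
(let* ((image (vector-ref (cadr (gimp-image-list)) 0))
       (selection (car (gimp-image-get-selection image))))
  ; draw a pencil stroke directly on the selection channel,
  ; just like painting in QuickMask mode (coordinates are arbitrary)
  (gimp-pencil selection 4 #(10 10 100 100)))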

Cursor position in a UITextView

I am looking for a non-private way to find the position of the cursor or caret (the blinking bar) in a UITextView, preferably as a CGPoint.
There may be a question like this already, but it does not provide a definitive way of doing it.
And, to be clear, I do not mean the NSRange of the selected area.
Just got it in another thread:
Requires iOS 3.2 or later.
CGPoint cursorPosition = [textview caretRectForPosition:textview.selectedTextRange.start].origin;
Remember to check that selectedTextRange is not nil before calling this method. You should also use selectedTextRange.empty to check that it is the cursor position and not the beginning of a text range. So:
if (textview.selectedTextRange.empty) {
    // get cursor position and do stuff ...
}
Pixel-Position of Cursor in UITextView
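Combining the nil and empty checks with the call itself, something along these lines (Objective-C, untested):
UITextRange *selectedRange = textview.selectedTextRange;
if (selectedRange != nil && selectedRange.empty) {
    // the caret rect's origin is the cursor position in the text view's coordinates
    CGPoint cursorPosition = [textview caretRectForPosition:selectedRange.start].origin;
    NSLog(@"cursor position: %@", NSStringFromCGPoint(cursorPosition));
}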
Swift 2.1 version:
let cursorPosition = infoTextView.caretRectForPosition( (infoTextView.selectedTextRange?.start)! ).origin
print("cursorPosition:\(cursorPosition)")
There is no such way. Full stop.
The only thing you could do to come close is to lay out the text in parallel using CoreText, calculate the text position from a point there, and apply that to the UITextView. This, however, works with non-zero selections only. In addition, CoreText has a different text layout engine that, e.g., supports kerning, which UITextView doesn't. This may result in deviations between the rendered and the laid-out text, and thus give sub-optimal results.
There is absolutely no way to position the caret. This task is very tough even if you do use private API. It's just one of the many "just don't"s in iOS.
Swift 4 version:
// let's be safe, thus if-let
if let selectedTextRange = textView.selectedTextRange {
    let caretPositionRect = textView.caretRect(for: selectedTextRange.start)
    let caretOrigin = caretPositionRect.origin
    print(">>>>> position rect: \(caretPositionRect), origin: \(caretOrigin)")
}
We simply use textView.selectedTextRange to get the selected text range and then textView.caretRect(for:) to get a CGRect defining the position of the caret.
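If you need the point in another view's coordinate space (for example the text view's superview), a small sketch along the same lines (caretOrigin(in:) is a helper name of my own choosing):
// returns the caret origin converted to the superview's coordinate space,
// or nil if there is no cursor (nothing selected, or a non-empty selection)
func caretOrigin(in textView: UITextView) -> CGPoint? {
    guard let range = textView.selectedTextRange, range.isEmpty else { return nil }
    let caretRect = textView.caretRect(for: range.start)
    return textView.convert(caretRect.origin, to: textView.superview)
}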

Alternative to CGPathGetPathBoundingBox() for iPad (iOS 3.2)

I'm trying to get my head around using QuartzCore to render semi-complex text/gradient/image UITableViewCell composites. Thankfully, Opacity will let me visually build the view and then spit out source code to drop into Cocoa Touch. Trouble is, Opacity assumes the code is running on iOS 4, which is a problem if you want to draw Quartz views on an iPad (still on iOS 3.2).
For me, the offending method is CGPathGetPathBoundingBox ... would someone mind pointing me to a suitable alternative or workaround to this (presumably simple) method?
If you care to have some context (no pun intended), here you go:
transform = CGAffineTransformMakeRotation(1.571f);
tempPath = CGPathCreateMutable();
CGPathAddPath(tempPath, &transform, path);
pathBounds = CGPathGetPathBoundingBox(tempPath);
point = pathBounds.origin;
point2 = CGPointMake(CGRectGetMaxX(pathBounds), CGRectGetMinY(pathBounds));
transform = CGAffineTransformInvert(transform);
The alternative is to iterate over the points of the path, note down the leftmost, rightmost, topmost, and bottommost coordinates of the anchor points yourself, and then work out the origin and size from those numbers.
You should wrap this in a function, and name it something like MyPathGetPathBoundingBox, and use that until you drop support for iOS 3.x. That will make it easy to switch to CGPathGetPathBoundingBox once you can.
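A rough sketch of such a helper using CGPathApply (untested; because it only collects the path's anchor and control points, for curved segments it can report a box a little larger than the true path bounds, closer to what CGPathGetBoundingBox returns):
// applier that grows the bounding rect to include every point of the element
static void MyBoundsApplier(void *info, const CGPathElement *element) {
    CGRect *bounds = (CGRect *)info;
    size_t count = 0;
    switch (element->type) {
        case kCGPathElementMoveToPoint:
        case kCGPathElementAddLineToPoint:      count = 1; break;
        case kCGPathElementAddQuadCurveToPoint: count = 2; break;
        case kCGPathElementAddCurveToPoint:     count = 3; break;
        case kCGPathElementCloseSubpath:        count = 0; break;
    }
    for (size_t i = 0; i < count; i++) {
        CGPoint p = element->points[i];
        // CGRectUnion treats CGRectNull as empty, so the first point simply replaces it
        *bounds = CGRectUnion(*bounds, CGRectMake(p.x, p.y, 0.0f, 0.0f));
    }
}

static CGRect MyPathGetPathBoundingBox(CGPathRef path) {
    CGRect bounds = CGRectNull;
    CGPathApply(path, &bounds, MyBoundsApplier);
    return bounds;
}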