dojo/on, emit, widget.on etc. in Dojo 1.8/2.0

I would like to gain a full understanding of how events work in Dojo. I am actually interested in the way Dojo 2.0 works -- I am using 1.8 now, but I am only really interested in using/documenting features that will not be deprecated for 2.0.
Now... in _WidgetBase.js I read:
on: function(/*String|Function*/ type, /*Function*/ func){
// For backwards compatibility, if there's an onType() method in the widget then connect to that.
// Remove in 2.0.
This means that in the near future a widget's on() will essentially do:
on: function(/*String|Function*/ type, /*Function*/ func){
// Otherwise, just listen for the event on this.domNode.
return this.own(on(this.domNode, type, func))[0];
Which is fine. Now... in the release note for 1.8, I see:
"Widget events, including attribute changes, are emitted as events on the DOM tree"
The release notes point to this: http://livedocs.dojotoolkit.org/quickstart/events#widget-events-published-to-the-dom which "sort of" explains things, although the document seems to be outdated (it still talks about aspect for "plain object").
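To check my understanding of that release note, here is a minimal sketch (the widget id "myWidget", the node id "container" and the event name "item-selected" are all names I made up): emit() dispatches a bubbling DOM event on the widget's domNode, so a plain dojo/on listener on an ancestor node can catch it:

require(["dojo/on", "dojo/dom", "dijit/registry", "dojo/domReady!"], function(on, dom, registry){
  var myWidget = registry.byId("myWidget"); // some widget sitting inside #container
  // Listen on an ancestor DOM node, with no reference to the widget at all.
  on(dom.byId("container"), "item-selected", function(evt){
    console.log("widget event reached the DOM tree", evt);
  });
  // Inside widget code this would be this.emit(...); bubbles seems to default to true
  // in 1.8, but I pass it explicitly to be safe.
  myWidget.emit("item-selected", {bubbles: true});
});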
So, my question: is there a spot/bunch of pages/tickets that would describe the current updated way in which the whole events thing works?
My current understanding (for Dojo 2.0):
on: 100% delegated to on.js
emit: when you run randomWidget.on('something', function(){}):
- if randomWidget has 'onsomething', it will simply run that; <--- will this go away with 2.0?
- otherwise, it will delegate to on()
So, it's all about understanding dojo/on. That's when I get confused: reading the source code, on.js seems to delegate functionality to the widget itself (which, as I just wrote above, will simply delegate back to dojo/on from 2.0...?!?). Unless the bit that delegates to the object is destined to disappear...?
Also, I am used to writing widgets with templates, and then adding nodes with data-dojo-attach-event="onclick:_click" so that a function is called when somebody clicks on them. With the new on() system, will this change? (I mean, all widget events propagate to the DOM, but is the opposite also true?)
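For reference, this is the pattern I mean — a sketch only, where "buttonNode" and _click are my own names, and on/lang come from "dojo/on" and "dojo/_base/lang" in the define() dependencies:

// In the template (I believe 1.8 also accepts the dojo/on-style "click:_click"):
// <span data-dojo-attach-point="buttonNode" data-dojo-attach-event="onclick:_click"></span>

// The explicit dojo/on equivalent, wired up in postCreate:
postCreate: function(){
  this.inherited(arguments);
  // own() removes the handle automatically when the widget is destroyed
  this.own(on(this.buttonNode, "click", lang.hitch(this, "_click")));
},
_click: function(evt){
  console.log("clicked", evt);
}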
So, can somebody shed some light on this? I feel a little uneasy at the moment, adding events and doing things, because I am not 100% sure of what is going on.
Thank you!

Gosh that was a while ago...
Since then, I wrote this:
https://github.com/mercmobily/writeups/blob/master/dojo/widgets_containers_on.md
Which explains pretty much the whole lot!
Merc.

Is there any way to prevent a view from being unbound on deactivation?

I find, even when assigning the decorator #singleton(false) to the view-model, that while the view-model does then persist as a singleton across activation/deactivation, the bindings, components, etc. do not.
(I assume this is because they are stored in a container that is disposed on deactivation.)
The result is that upon every deactivation/activation of a view with a singleton view-model, the view is un-bound and then re-bound.
Is it possible to cause the bindings to persist across deactivation/activation?
This one stumped me for a good while. I was also confused as to why implementing it was not a higher priority for the Aurelia team.
This takes a fair amount of code to get working. I have put the code in a Gist here: https://gist.github.com/Vaccano/1e862b9318f4f0a9a8e1176ff4fb727e
All the files are new ones except the last, which is a modification to your main.ts file. Also, all my code is in TypeScript; if you are using JavaScript you will have to translate it.
The basic idea behind it is that you cache the view and the view-model, and you replace the router with a caching router. When the user navigates back to your page, it first checks whether a view/view-model has already been created and uses that instead of making a new one.
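To make the idea concrete, here is a bare sketch of the caching part only (CachedEntry and viewCache are names I made up for illustration; the real wiring into the router is in the Gist):

// Keyed by route name; holds whatever would otherwise be rebuilt on every activation.
interface CachedEntry {
  viewModel: object; // the view-model instance to reuse
  view: object;      // the already-bound view, kept instead of being disposed
}

const viewCache = new Map<string, CachedEntry>();

// The caching router calls something like this when a route activates.
function resolve(routeName: string, create: () => CachedEntry): CachedEntry {
  let entry = viewCache.get(routeName);
  if (!entry) {
    // First visit: build the view/view-model as usual and remember the pair.
    entry = create();
    viewCache.set(routeName, entry);
  }
  // Later visits: hand back the cached pair, so bindings are never torn down.
  return entry;
}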
NOTE: This is not "component"-level code. That means I have only tested that this works for the scenario I use it for.
I got my start for making this change here: https://github.com/aurelia/router/issues/173 Another user (Scapal) made something that works for him and posted it there. If what I am posting does not work for you, his may help you out.
I'm also interested in an answer to this.
Maybe I'm making a complete fool of myself here, but why not use the aurelia-history navigate(..) command?

What exactly happens when you press the "play" button in Unity?

As soon as you hit "Play" what happens in the background of the software? The code is already compiled and ready at this point. So when I press "Play" the code gets executed. What other things occur along with this?
I have this question as an assignment and would really like to know. Thanks. :)
Actually, everything is loaded by script. The flowchart on the "Execution Order of Event Functions" page explains the process, and the links below can also be useful for understanding the background processes:
Execution Order of Event Functions
Overview: Script compilation
Asking what happens when you press Play is like asking Coke to reveal their drink recipe: this is what they sell. You got that as an assignment, but the fact is you could say almost anything and your teacher could hardly tell you you're wrong, since he does not know either (unless he works for a company that bought the source code of the engine).
What you can say is that the OpenGL/DirectX API is initialised, all the events (input, application data and so on) are registered with the OS, then the engine itself starts up: the needed classes are registered in memory, the physics is initialised, the opening scene's YAML file is parsed, the content is created and placed in space, all callbacks are registered for each item that is a MonoBehaviour, plus all the debug code related to the profiler and stack tracing, crash reports and much more...
Those are the obvious ones, and I cannot have any real clue of what is going on without using a tool to decompile the code. Problem is, that would be against the EULA and therefore illegal.

Track all ObjC method calls?

Sometimes when looking at someone else's large Objective-C program, it is hard to know where to begin.
In such situations, I think it would be helpful to log every call to every non-Apple method.
Is there a way to do that? Basically, make one change in some central place, and log every method that is called. Preferably limited to non-Apple methods.
You can set the environment variable NSObjCMessageLoggingEnabled to YES. This will write a log of all message sends to a file named /tmp/msgSends-xxx.
You could add a symbolic breakpoint to objc_msgSend(), and have it log the second parameter without stopping.
How to do it for your own methods only, though, is a tougher task. Maybe you could inspect the class name being called and do some magic to have a conditional breakpoint only for calls where the class's prefix matches your own?
I don't think logging every single call is practical enough to be useful, but here's a suggestion in that direction.
On a side note, if it's a large program, it better have some kind of documentation or an intro comment for people to get started with the code.
In any case, every Cocoa application has an applicationDidFinishLaunching... method. It's a good place to start from. Some apps also have their principal (or 'main window') class defined in the Info.plist file. Both these things might give you a hint as to what classes (specifically, view controllers) are the most prominent ones and what methods are likely to have long stack-traces while the program is running. Like a game-loop in a game engine, or some other frequently called method. By placing a breakpoint inside such a method and looking at the stack-trace in the debugger, you can get a general idea of what's going on.
If it's a UI-heavy app, looking at its NIB files and classes used in them may also help identify parts of app's functionality you might be looking for.
Another option is to fire up the Time Profiler instrument and check both the Hide missing symbols and Hide system libraries checkboxes. This will give you not only a bird's-eye view of the methods being called inside the program, but will also pinpoint the most often called ones.
By interacting with your program with the Time Profiler recording on, you could also identify different parts of the program's functionality and correlate them with your actions pretty easily.
Instruments allows you to build your own "instruments", which are really just DTrace scripts in disguise. Use the menu option Instrument >> Build New Instrument and select options like which library you'd like to trace, what you'd like to record when you hit particular functions, etc. Go wild!
That's an interesting question. The answer would be more interesting if the solution supported multiple execution threads and there were some sort of call timeline that could report the activity over time (maybe especially with user events plotted in somehow).
I usually fire up the debugger, set a breakpoint at the main entry point (e.g. application:didFinishLaunchingWithOptions:) and walk it in the debugger.
On OSX, there are also some command-line tools (e.g. sample and heap) that can provide some insight.
It seems like some kind of integration with instruments could be really cool, but I am not aware of something that does exactly what you're wanting (and I want it now too after thinking about it).
If one were to log a thread number, and call address, and some frame details, it seems like the pieces would be there to plot the call timeline. The logic for figuring out the appropriate library (Apple-provided or third party) should exist in Apple's symbolicatecrash script.

Rails3 Engine helper over-ride

So I have a Rails 3.0 Engine (gem).
It provides a controller at app/controllers/advanced_controller.rb and a corresponding helper at app/helpers/advanced_helper.rb (and some views, of course).
So far so good, the controller/helper/views are just automatically available in the application using the gem, great.
But I want to let the local application selectively over-ride helper methods from AdvancedHelper in the engine (and ideally be able to call 'super'). That's a pretty reasonable thing to want to allow, right? A perfectly reasonable (and I'd think common) design?
Problem is, I can't seem to find any way to make it work. If the application defines its own app/helpers/advanced_helper.rb (AdvancedHelper), then the one from the engine never gets loaded at all -- so that would work if you wanted to replace ALL the helper methods in there (without calling super), but not if you just want to over-ride one.
That kind of makes sense actually, so I pick a different name. Let's call my local one ./app/helpers/local_advanced_helper.rb (LocalAdvancedHelper). This helper DOES get loaded, and if I put a method in there that wasn't in the original engine's AdvancedHelper, it is available to views.
But if I put a method in there with the same name as one in the engine's AdvancedHelper... my local one NEVER gets called. It's like the AdvancedHelper (from engine) is earlier in the call chain than the LocalAdvancedHelper (from app). Indeed, if I turn on the debugger, and look at helpers.ancestors, that's exactly what's going on, they're in the reverse order I'd want in the ancestor chain. So AdvancedHelper (from engine) could theoretically call 'super' to call up to LocalAdvancedHelper (from app) -- but that of course wouldn't make a lot of sense to do, you'd never want to do that.
But what I would want to do... I can't do.
Anyone have any ideas, is there any way to provide this design, which seems perfectly reasonable to me, where an app can selectively over-ride a helper method from an Engine?
Anyone have any explanation of why it's working the way it is? I tried looking at the actual Rails source code, but got lost pretty quickly; the code around this stuff is awfully abstract and split amongst a bunch of places.
This is a pretty esoteric question, and I'm pessimistic anyone will have any ideas, so I hope you surprise me!
== Update
Okay, in order to understand what part of Rails code is being called where, I put a "def self.included ; debugger ; end" on each of my helpers, then in the debugger I can raise an exception to see a stack trace.
That still isn't really helping me get to the bottom of it; the Rails code jumps all over the place and is pretty confusing.
But it's clear that a helper with the 'standard' name (i.e. WidgetHelper for WidgetController) is included in the 'master' view helper module for a given controller by different Rails code than other helpers are. I'm wondering whether, if I give the helper a different name and then manually include it in my controller (with "helper OtherNamedAdvancedHelper"), that will change the load order.
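In code, the experiment I have in mind looks roughly like this (OtherNamedAdvancedHelper and the to_prepare wiring are just my guesses at how it would look; I don't know yet whether it actually changes the ancestor order):

# app/helpers/other_named_advanced_helper.rb (in the application)
module OtherNamedAdvancedHelper
  def advanced_helper
    # the hope: this wins over the engine's AdvancedHelper#advanced_helper,
    # while super still reaches the engine's implementation
    super
  end
end

# e.g. in config/initializers/override_engine_helpers.rb, inside to_prepare
# so it survives class reloading in development:
Rails.application.config.to_prepare do
  AdvancedController.helper OtherNamedAdvancedHelper
end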
We can use Module#class_eval to override.
In the main app:
MountedEngineHelper.class_eval do
  def advanced_helper
    ...
  end
end
This way, the other methods defined in the engine helper are still available.
Thanks for your elaboration. I think this really is a problem. And it is still present in Rails 3.2.3, so I filed an issue.
The least-smelling workaround I came up with is to do a "half alias method chain":
module MountedEngineHelper
  def advanced_helper
    ...
  end
end

module MyHelper
  def advanced_helper_with_extra_behavior
    advanced_helper
    extra_behavior
  end
end
The obvious drawback is that you have to change your templates so that your helper is called. At least, you make the existence of extra behavior explicit there.
These release notes from Rails 4 seem enticingly related to this problem, and potentially note that it's been fixed:
http://edgeguides.rubyonrails.org/upgrading_ruby_on_rails.html#helpers-loading-order

How to get rid of unmanaged code in VW 3.1d and ENVY

I have an old VW3/ENVY image with a parcel loaded as unmanaged code (exactly the situation Mastering ENVY/DEVELOPER warns against). Unfortunately, this problem happened a long time ago and it's too late to just "go back" to an image without the parcel loaded.
Apparently, there is a way to solve this problem (we have one development image where this has been solved, and there are normal configuration maps that contain the exact same code as the unmanaged parcel, but they can't be loaded), but the exact way has long since been forgotten (and there are some problems with taking that particular dev image as the base for a new runtime image, so I need to find out how to do it again).
In theory, it should be possible to remove the parcel and reload the code from a configuration map. In practice, all normal ways (using the ParcelBrowser or directly calling UnmanagedCode>>remove) fail. I even tried manually removing the offending selectors from the method dictionary, but past a certain point (involving a call to #primBecome:) the whole image hangs completely (I can't even drop into the debugger). I started hacking the instances of the classes and methods, hoping I'd trick ENVY into thinking that these particular methods are normal versioned code, but without any success yet.
Are there any smalltalk/envy gurus around that still remember enough of VW 3 to provide me with any pointers?
Status update
After a week of trying to solve the problem I finally made it, at least partially, so in case anyone's interested...
First, I had to fix the file pointers for the unmanaged code (otherwise, everything that tried to touch the methods would throw an exception). It looks like ENVY extends Parcel so that, in theory, all integer file pointers are changed to ENVY's void filepointer when loaded, but in my case I had to do it manually (a Parcel provides enumeration for all selectors it defines). Another way would be to tweak the filePointer code, but that can't easily be done automatically on every image where it's needed.
Then the parcel can be discarded, which drops the parcel information but keeps the code. The official "Discard" mechanism needs a valid changes file (which ENVY doesn't use, so it has to be set manually and reset afterwards) and the parcel source (which we fortunately had).
To be able to make any changes to the methods (either manually, or via loading an application or class from ENVY), they need to get rid of their unmanaged status. This can be done by manually tweaking TheClass>>applicationAssocs (I also got rid of all references to the classes in UnmanagedCode, such as timestamps, and removed the reference to the discarded parcel). I actually had some info on how to get to this point from my boss, but I hadn't been able to understand the instructions until I had almost figured it out by myself.
This finally allowed me to load and reload all the Applications that contained the classes. In theory. In practice, the image still hung completely whenever I tried to load a newer version of the Application (that contained the code formerly in the parcel).
It turned out that the crashes had absolutely nothing to do with the code being unmanaged, but with the fact that the parcel in question modified InputState>>process:, where it caused an exception due to a missing and/or uninitialized class variable (the InputState>>initialize method wasn't called until after the new process: method was in place). I had to modify the Notifier class to dump all exceptions to a file to find out what was going on. Adding the class variable to the source of the class (instead of adding it via reflection), suspending the input processing thread via toBeLoadedCode and starting it again in the loaded method, and creating a new version of the application solved even this problem.
Now everything works, in theory. In practice it's still unusable, because reloading the WindowSystem or VisualworksBase applications causes their initialization blocks to run, and a whole lot of settings are reset to their defaults - fonts and font sizes, window colors, UI settings... And there doesn't seem to be any way to just save the settings to a file and load them later on, or just to see what all the settings are (either the official Settings menu doesn't show everything, or we have a heavily tweaked image... so much for reconstructing it from scratch). But that's a completely different question.
Well, normally the recommendation would be that you should be able to rebuild your development image from scratch by loading your code from the repository. But if you had that, then the answer would be simple: just discard that image and reload. I think it's been long enough that I've lost whatever knowledge I had about how to mess with the internal structures to get it back, and it sounds like you've tried a lot of things. So, although it might be painful, figuring out the recipe to rebuild your development image by loading stuff from the repository sounds like it may be your best bet. It probably isn't all that horrible; there just might be a few dependencies on the image state, or special doits that need to be executed.
You also probably need to validate what's in the repository against what's in the image you're working from. If there was unmanaged code loaded and then someone modified it and saved it, it's not clear to me that it would have been saved to ENVY. So you probably want to audit everything that was unmanaged code and if it's been changed, save that to a repository edition.
Sorry I don't have any better answers.