When would you use Ext.application() vs. Ext.Loader.setConfig, .require, and .onReady? - extjs4

I see that some of the examples included with ExtJS 4 start up via a single call to Ext.application(). Other examples, however, manually call Ext.Loader.setConfig(), Ext.require(), and Ext.onReady() instead. I want to make sure I understand the difference.
Am I correct in assuming that:
you'd normally use the convenient Ext.application() call for a full-screen (e.g., Viewport-based) app?
if you just want to use a few ExtJS components on a pre-existing "non-Ext" page you'd opt for the manual calls to Ext.Loader, require, and onReady()?
Thanks for the clarification!

The full Ext.application() call is used for the Ext MVC approach, and comes with a set of conventions to pre-load additional components, for example via the stores and views options in the controller classes. For a better explanation see the Ext documentation on MVC.
If you just need to throw a few components on the page, as you state, you will get better performance just using the loader, or better yet, avoiding dynamic loading altogether (at least in production).
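For illustration, a minimal sketch of both styles (the app name, the 'Main' controller, and the 'some-div' element id are placeholders, not from the question):
// MVC style: conventions load the named controllers (and their stores/views) for you
Ext.application({
    name: 'MyApp',
    appFolder: 'app',
    controllers: ['Main'],
    launch: function () {
        // build the Viewport / main UI here
    }
});

// Manual style: drop a few components onto an existing "non-Ext" page
Ext.Loader.setConfig({ enabled: true });
Ext.require(['Ext.panel.Panel']);
Ext.onReady(function () {
    Ext.create('Ext.panel.Panel', {
        renderTo: 'some-div',           // id of an existing element on the page
        title: 'Standalone panel',
        html: 'Rendered without a Viewport'
    });
});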


Creating Cytoscape.js extensions

Max, I want to update my extension to the new format, but I am running into issues with the placement of custom code. It seems that the extension framework has been updated a lot since I added an extension 4 years ago. Is there a way to get better documentation on getting started with adding an extension? I am happy to help write up the documentation if you can help answer some questions that I think would help get people started. Let me know.
The only thing that really changed is that the scaffolder creates a webpack project for you. The extension registering procedure is the same: http://js.cytoscape.org/#extensions/api
For example, cytoscape( 'collection', 'fooBar', function(){ return 'baz'; } ) registers eles.fooBar().
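To make that concrete, a minimal headless sketch of registering and then calling it (the element data is made up):
cytoscape('collection', 'fooBar', function(){ return 'baz'; });
var cy = cytoscape({ elements: [ { data: { id: 'a' } } ] }); // no container given, so the instance runs headlessly
console.log( cy.elements().fooBar() ); // 'baz'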
I guess the main thing is that there are a lot more files than what the previous scaffolder generated, so it might be harder to find things. The layout output has lots of files because it creates a skeleton implementation for both the continuous case and the discrete case.
The scaffolder isn't strictly necessary. You could use another build system (or none at all) as long as you call cytoscape(). For example, if you only care about publishing to npm for people who use webpack/browserify/rollup, then you could just use a CommonJS require('cytoscape') to pull in the peer dependency. Exporting a register function is nice if you want to allow the client to decide the order of extension registrations with cytoscape.use(extension) (or extension(cytoscape)).
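A minimal sketch of such an entry point, reusing the fooBar example above (file layout and names are illustrative, not the scaffolder's actual output):
// CommonJS entry point for an extension
var register = function (cytoscape) {
  if (!cytoscape) { return; } // can't register without the library
  cytoscape('collection', 'fooBar', function () { return 'baz'; }); // registers eles.fooBar()
};
if (typeof cytoscape !== 'undefined') {
  register(cytoscape); // auto-register when a global cytoscape exists
}
module.exports = register; // consumers can also call cytoscape.use(require('the-extension'))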
You're right that there should be some more docs on the output of the scaffolder. Maybe a summary of the files would suffice. We could add a tutorial in the blog later if need be. Both the docs and the blog just use markdown, so the content could go in either place.

Asp.Net Core validation without jQuery

Is there any way to use ASP.NET Core client-side validation without jQuery + jQuery Validation + jQuery Unobtrusive Validation?
I mean, it's 2017; every browser can handle lots of HTML5 input validation without JavaScript.
I created a library that is compatible with the original validation library but is much smaller and does not require jQuery. You can use it as a drop-in replacement if you want to reduce the payload size of your assets:
https://github.com/kraaden/unobtrusive-validation
I don't know of any package that turns the input-field TagHelpers into HTML5 validation attributes. You could probably quite easily make one. Check out the source code of the tag helpers: https://github.com/aspnet/AspNetCore/blob/master/src/Mvc/Mvc.TagHelpers/src/InputTagHelper.cs
Advice
The best approach would be to make sure you receive a JSON error object when validation fails and process this in JavaScript to show the errors (maybe indicating which fields failed). Make a generic function you can use for any form; see the sketch below.
Later, manually fine-tune or add additional tags/validation in the frontend if needed, but that shouldn't be part of your MVP (minimum viable product).
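A hedged sketch of what such a generic function could look like, assuming the server answers failed validation with HTTP 400 and a body shaped like { errors: { FieldName: ["message"] } }, and that each field has an element with a data-valmsg-for attribute (as asp-validation-for renders); adjust to your actual response shape:
// Generic, jQuery-free form submit that renders server-side validation errors
function submitWithValidation(form) {
    var data = {};
    new FormData(form).forEach(function (value, key) { data[key] = value; });
    return fetch(form.action, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(data)
    }).then(function (response) {
        if (response.ok) { return response; }
        return response.json().then(function (problem) {
            Object.keys(problem.errors || {}).forEach(function (field) {
                var target = form.querySelector('[data-valmsg-for="' + field + '"]');
                if (target) { target.textContent = problem.errors[field].join(' '); }
            });
            throw new Error('validation failed');
        });
    });
}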
Old answer
Depends on what kind of validation you are looking for.
jQuery validation in combination with data annotations is the easiest and the default option. Although HTML5 can tackle quite extensive checks, it can't handle in-depth scenarios such as properties that depend on each other.
Another option is to use the popular FluentValidation library: validate only server-side, but use Ajax and a small client-side framework to show the errors.
The last option would be to use HTML5 input validation. This can only be partly achieved by default using .NET types or some annotations: http://www.davepaquette.com/archive/2015/05/13/mvc6-input-tag-helper-deep-dive.aspx. You'd have to extend the tag helpers yourself if you want more in-depth HTML5 attribute support; a small example follows below.
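If you go that route, the browser's built-in constraint validation API already gives you the hooks without jQuery; a minimal sketch (the form id is a placeholder):
// Relies on HTML5 attributes (required, type="email", pattern, ...) present in your markup
var form = document.getElementById('my-form'); // hypothetical form id
form.addEventListener('submit', function (event) {
    if (!form.checkValidity()) { // runs the browser's built-in validators
        event.preventDefault();
        form.reportValidity(); // shows the native error messages
    }
});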
Too bad you can't use the same frontend and server-side code (like Node.js can), but for big projects I'd advise you to take option 2. It might not sound very snappy, but .NET is generally pretty fast, and if you use FluentValidation in combination with Ajax API calls, no one will notice the validation actually happens server-side. In the end I found that data annotations always lack proper support for properties that depend on each other, and it gets very hacky very fast. The only downside of this option is that you split the validation from the input model, so the validation isn't always directly visible to external developers.

Is there any way to prevent a view from being unbound on deactivation?

I find, even when assigning the @singleton(false) decorator to the view-model, that while the view-model does then persist as a singleton across activation/deactivation, the bindings, components, etc. do not.
(I assume this is because they are stored in a container that is disposed on deactivation.)
The result is that upon every deactivation/activation of a view with a singleton view-model, the view is un-bound and then re-bound.
Is it possible to cause the bindings to persist across deactivation/activation?
This one stumped me for a good while. It also confused me as to why implementing it was not a higher priority for the Aurelia Team.
This takes a fair amount of code to get working. I have put the code in a Gist here: https://gist.github.com/Vaccano/1e862b9318f4f0a9a8e1176ff4fb727e
All the files are new ones except the last, which is a modification to your main.ts file. Also, all my code is in TypeScript; if you are using JavaScript you will have to translate it.
The basic idea behind it is that you cache the view and the view-model, and you replace the router with a caching router. So when the user navigates back to your page, it first checks whether there is a view/view-model already created and uses that instead of making a new one.
NOTE: This is not "component"-level code; I have only tested that it works for the scenario I use it for.
I got my start for making this change here: https://github.com/aurelia/router/issues/173 where another user (Scapal) made something that works for him and posted it. If what I am posting does not work for you, his may help you out.
I'm also interested in an answer to this.
Maybe I'm making a complete fool of myself, but why not use the aurelia-history navigate(..) command?

Access closure property names in the content block at runtime

I want to evaluate my content blocks before running my test suite, but the closures' property names are in bytecode already. I'm looking for the cleanest solution (compared with parsing the source manually).
I already tried the solution outlined in this post (and I'd still wind up doing some regex/parsing) but could only get it to work via the script execution engine; it failed in the IDE and GroovyConsole. Rather than embedding a Groovy script in the project's code, I thought I'd try using Geb's native classes.
Is building on the suggestion here about extending Geb Navigators viable for Geb's PageContentSupport class, whose contentTemplates contain a LinkedHashMap of exactly what I need? If yes, would someone provide guidance? If no, any suggestions?
It is currently not possible to get hold of all content elements for a given page/module. Feel free to create an issue for this in Geb's bug tracker, but remember that all that Geb can provide is either a list of content element names or a map from these names to closures that create these elements.
Having that information isn't a generic solution to your problem, because it's possible for content elements to take parameters, and there are situations where your content elements will be available on the page only after some other actions are performed (for example, you have to click on a button to reveal a section of a page that uses ajax to retrieve its content). So I'm afraid that simply going over all elements and checking that they don't throw any errors will not cut it.
I'm still struggling to see what "evaluating" all content elements prior to running the suite would buy you. Are you after verifying that your content elements still work, to get faster feedback than running the whole suite? I'm pretty sure that you won't be able to fully automate detection of content definitions that don't work anymore. In my view it will be more effort than it's worth.

Rails3 Engine helper over-ride

So I have a Rails 3.0 Engine (gem).
It provides a controller at app/controllers/advanced_controller.rb, and a corresponding helper at app/helpers/advanced_helper.rb. (And some views, of course.)
So far so good, the controller/helper/views are just automatically available in the application using the gem, great.
But I want to let the local application selective over-ride helper methods from AdvancedHelper in the engine (and ideally be able to call 'super'). That's a pretty reasonable thing to want to allow, right, a perfectly reasonable (and I'd think common) design?
Problem is, I can't seem to find any way to make it work. If the application defines its own app/helpers/advanced_helper.rb (AdvancedHelper), then the one from the engine never gets loaded at all -- so that would work if you wanted to replace ALL the helper methods in there (without calling super), but not if you just want to over-ride one.
So that kind of makes sense actually, so I pick a different name. Let's call my local one ./app/helpers/local_advanced_helper.rb (LocalAdvancedHelper). This helper DOES get loaded: if I put a method in there that wasn't in the original engine's AdvancedHelper, it is available to views.
But if I put a method in there with the same name as one in the engine's AdvancedHelper... my local one NEVER gets called. It's like the AdvancedHelper (from engine) is earlier in the call chain than the LocalAdvancedHelper (from app). Indeed, if I turn on the debugger, and look at helpers.ancestors, that's exactly what's going on, they're in the reverse order I'd want in the ancestor chain. So AdvancedHelper (from engine) could theoretically call 'super' to call up to LocalAdvancedHelper (from app) -- but that of course wouldn't make a lot of sense to do, you'd never want to do that.
But what I would want to do... I can't do.
Anyone have any ideas, is there any way to provide this design, which seems perfectly reasonable to me, where an app can selectively over-ride a helper method from an Engine?
Anyone have any explanation of why it's working the way it is? I tried looking at the actual Rails source code, but got lost pretty quickly; the code around this stuff is awfully abstract and split amongst a bunch of places.
This is a pretty esoteric question, so I'm pessimistic anyone will have any ideas, but I hope you surprise me!
== Update
Okay, in order to understand what part of Rails code is being called where, I put a "def self.included ; debugger ; end" on each of my helpers, then in the debugger I can raise an exception to see a stack trace.
That still isn't really helping me get to the bottom of it; the Rails code jumps all over the place and is pretty confusing.
But it's clear that a helper with the 'standard' name (i.e. WidgetHelper for WidgetController) is included in the 'master' view helper module for a given controller by different Rails code than other helpers are. I'm wondering whether, if I give the helper a different name and then manually include it in my controller with ("helper OtherNamedAdvancedHelper"), that will change the load order.
We can use Module#class_eval to override.
In the main app:
MountedEngineHelper.class_eval do
  def advanced_helper
    ...
  end
end
This way, other methods defined in the engine helper are still available.
Thanks for your elaboration. I think this really is a problem. And it is still present in Rails 3.2.3, so I filed an issue.
The least-smelling workaround I came up with is to do a "half alias method chain":
module MountedEngineHelper
  def advanced_helper
    ...
  end
end

module MyHelper
  def advanced_helper_with_extra_behavior
    advanced_helper
    extra_behavior
  end
end
The obvious drawback is that you have to change your templates so that your helper is called. At least, you make the existence of extra behavior explicit there.
These release notes from Rails 4 seem enticingly related to this problem, and potentially note that it's been fixed:
http://edgeguides.rubyonrails.org/upgrading_ruby_on_rails.html#helpers-loading-order