I'm wondering: what is the importance of the ready() method in Polymer, what does one-time initialization mean for that method, and what are the differences between ready() and connectedCallback()?
The following should help.
connectedCallback can be called more than once
...
Generally, all setup work in connectedCallback should be balanced with associated teardown work in disconnectedCallback; if not, it usually means connectedCallback is the wrong place to do it
...
At this point it's hard to contrive a general use case where ready is the only/best place to do work, although such cases do come up from time to time in very specific situations.
...
ready is called only once, so work done here based on a property value may become invalidated if the property value is changed at a later point by the user
See https://github.com/Polymer/polymer/issues/4108#issuecomment-258983943
They both have the same "goal": to provide a way of doing something after the element has been placed in the DOM.
ready is Polymer-specific, while connectedCallback is part of the Custom Elements spec.
So if you have a vanilla web component, only connectedCallback is available. For Polymer elements you may still use ready (especially if you are converting an element from Polymer 1).
There are, however, some timing differences, described here:
https://www.polymer-project.org/2.0/docs/upgrade#custom-elements-apis
The ready callback, for one-time initialization, signals the creation of the element's shadow DOM. In the case of class-based elements, you need to call super.ready() before accessing the shadow tree.

The major difference between 1.x and 2.0 has to do with the timing of initial light DOM distribution. In the v1 shady DOM polyfill, initial distribution of children into <slot> is asynchronous (microtask) to creating the shadowRoot, meaning distribution occurs after observers are run and ready is called. In the Polymer 1.0 shim, initial distribution occurred before ready.
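To make the difference concrete, here is a minimal sketch of a Polymer 2 class-based element; the element name, the #label node, and the resize listener are hypothetical:

class MyElement extends Polymer.Element {
  static get is() { return 'my-element'; }

  ready() {
    // One-time initialization: runs once, after the shadow DOM is created.
    super.ready(); // required in class-based elements before touching the shadow tree
    this.shadowRoot.querySelector('#label').textContent = 'initialized';
  }

  connectedCallback() {
    super.connectedCallback();
    // May run many times (each time the element is attached to the DOM),
    // so balance setup here with teardown in disconnectedCallback.
    this._onResize = () => { /* ... */ };
    window.addEventListener('resize', this._onResize);
  }

  disconnectedCallback() {
    super.disconnectedCallback();
    window.removeEventListener('resize', this._onResize);
  }
}
customElements.define(MyElement.is, MyElement);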
Maybe I'm a bit too careful here. I create a component with createApp programmatically and mount it on a DOM node. Later, I remove the DOM node from the DOM tree.
Do I have to make sure to call unmount before removing the DOM node (with the mounted Vue3 app) in order to avoid a memory leak? Or is all data maintained in properties of the DOM node created during the mount? The documentation doesn't mention this explicitly (or I failed to find anything).
Explicit unmounting seems natural according to general resource allocation/deallocation rules. No need for explicit unmounting also seems natural, as this is a garbage-collected environment.
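In code, the pattern in question is roughly this (MyWidget is a hypothetical component):

import { createApp } from 'vue';
import MyWidget from './MyWidget.vue'; // hypothetical component

const host = document.createElement('div');
document.body.appendChild(host);

const app = createApp(MyWidget);
app.mount(host);

// later, during teardown:
app.unmount(); // is this call required to avoid a leak...
host.remove(); // ...or is removing the host node alone sufficient?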
Overview:
I'm refactoring my test scripts. Before, I used Enzyme for testing, but now I want to use @testing-library/react.
Problem:
I can't find a replacement for setState in @testing-library/react.
Using setState is a dangerous approach regardless of the testing library used.
It depends on implementation details (say, property names inside the state), so tests become much harder to maintain: there are more tests to change, it's easy to end up with a broken test while the app is fine, etc.
You cannot call it at all once you convert a class component to a functional one with hooks, so you depend on implementation details even more.
And finally, direct state manipulation may produce a state you would never get in the real world. This means your component could be broken because some state is impossible to reach in practice, while your tests with direct initialization still pass.
So what should you do instead? Provide props, change props, and call props (wrapper.find('button').filterWhere(button => button.text() === 'Cancel').props().onClick() for Enzyme, fireEvent.click(getByText(/Cancel/i)) for RTL), and verify against what's rendered.
This way your tests will be shorter and more realistic, and they will need fewer changes after you update the component under test.
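For example, a minimal sketch of the props-and-events approach with RTL (the Dialog component and its onCancel prop are hypothetical):

import React from 'react';
import { render, fireEvent } from '@testing-library/react';
import Dialog from './Dialog'; // hypothetical component under test

test('clicking Cancel notifies the parent', () => {
  const onCancel = jest.fn();
  const { getByText } = render(<Dialog open onCancel={onCancel} />);

  // drive the component through its public interface instead of setState
  fireEvent.click(getByText(/Cancel/i));

  expect(onCancel).toHaveBeenCalledTimes(1);
});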
I tried to use Kotlin's coroutine channels, but got a warning about the code using an ObsoleteCoroutinesApi. Where is the replacement for the deprecated channels code?
As of today, no replacement exists yet for the Kotlin coroutine channels API. Despite the confusing name, the annotation was added to indicate that the existing API is being rewritten and will eventually be replaced.
It's a warning that you can accept. If you have kotlinOptions.allWarningsAsErrors = true stopping you from building your app, you can simply add the @ObsoleteCoroutinesApi annotation to the top of the class to indicate that you accept the risk that your code will require changes.
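For instance, a minimal sketch (the class and the ticker-channel usage are illustrative):

import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.ObsoleteCoroutinesApi
import kotlinx.coroutines.channels.ticker
import kotlinx.coroutines.launch

@ObsoleteCoroutinesApi // accept the risk for everything in this class
class TickerService {
    fun start(scope: CoroutineScope) = scope.launch {
        val ticks = ticker(delayMillis = 1_000) // ticker channels are marked obsolete
        for (tick in ticks) {
            println("tick")
        }
    }
}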
However, this may quickly spiral out of control, as you need to apply these markers to every class that uses these APIs, and then every dependency that uses those classes, ad infinitum. To accept these risks project-wide, add the following to your Gradle options:
kotlinOptions.freeCompilerArgs += [
    "-Xuse-experimental=kotlinx.coroutines.ExperimentalCoroutinesApi",
    "-Xuse-experimental=kotlinx.coroutines.ObsoleteCoroutinesApi"
]
Feel free to update this answer when a replacement API exists.
VB in VS2008 under Windows 7 (64):
I need to change the value of a Property of a Component at some unpredictable time in DesignMode, and want the previously unknown new value to be embedded in the executable that results from VS compilation (as opposed to serializing it to some external file).
I have resorted to a text edit to swap the new value into the autogenerated Component initialization code in a prebuild event handler. This works fine, but it is a little hacky for my taste. Is there some way instead to force VS to refresh that text?
By luck, I found something that seems to work to force VS to autogenerate initialization code for the runtime instance of a Component, which is what I was after. (I needed successful communication between design time and runtime for Components. That's easy for Controls, which use the latest design-time BackgroundImage bitmap at runtime: you need only hide the Property value in the bitmap, which can be done entirely within the rules by using GetPixel and SetPixel.) I considered various hacks, but I hit upon the following, which works and makes sense (though I might be completely FoS about the "why"; if you know better, please educate me):
As I understand it, soon after a Component is dropped on a design surface in VS (and before it is rendered in the Component Tray), Visual Studio adds it to a collection of Components belonging to a Container. Adding it to the Container's collection is one step in a sequence of happenings that includes Visual Studio's autoregeneration of the Init procedure that will be used for the Component's root at runtime, and which includes values for the Public Properties of the Component. If you override the Set accessor of the Site property (the creation of ISite is an early step in that sequence) for your Component, and set a value for one of its Public Properties in the override, that value will show up in the autoregenerated text. This is almost what I wanted, except that it only worked when VS called Set Site, and I needed it to happen at any time I chose.
Then I took a flyer, and in the UI that sets the Property value in question (at some unknowable time), I added code to remove the Component from the Container's collection and then re-add it, hoping that this might again set off a sequence of happenings that would lead to VS again autoregenerating the Init code, this time with the new value of the Property. It apparently did. Yay.
By deciding when to re-add a Component to the Container's Components collection, I am now able to force VS to write in the autogenerated Init text any value I assign to a Public Property of that Component, and hence embed the value in the executable when it is compiled.
This technique is vulnerable to changes in the (undocumented) way that Microsoft implements autogeneration, and so is arguably a hack. But even documented features are subject to change. Backward-compatibility is a nice idea, but sometimes it has to give way. And delivery is a requirement. It would be great to know that your code will still be good in any future version of VS, but that, sadly, can't happen, hack or no.
Of course, documented features are in general less subject to change than undocumented ones. But the logic of autogeneration after all the initial Property values are set is pretty compelling. That Microsoft uses the same sequence later on is not so inherently logical, but doing it a different way would cost Microsoft money for no apparent gain. And Microsoft and their ilk (are legally required to) make decisions based on the bottom line. So the status quo seems like a good bet.
I am using ChaiJS with my Casper-Chai plugin, and I am not sure how to go about a particular issue I am encountering.
I would like to be able to write tests such as:
expect(casper).selector("#waldo").to.be.visible;
This is pretty straightforward, with a call such as
utils.addChainableMethod(chai.Assertion.prototype, 'selector',
    selectorMethod, selectorChainableMethod);
utils.addMethod(chai.Assertion.prototype, 'visible', visibleMethod);
where the *Method references are functions that perform the respective tests or chainable calls.
My question is: what is the best way to have e.g. 'selector' modify descendants in the chain? Two options come to mind:
use utils.flag(chai, 'object') to change it to the selector; or
create a new flag with e.g. utils.flag(chai, 'casper-selector')
When 'visible' is called, it can read the respective flag. Modifying the 'object' seems useful when calling e.g. 'length' later on. However, I am somewhat concerned about unexpected side effects of changing the 'object'.
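To make the second option concrete, a rough sketch of the flag mechanics (this assumes Casper exposes casper.visible(selector), and that visible is invoked as a method):

function selectorMethod(sel) {
  // called for .selector("#waldo"): remember the selector on this assertion
  utils.flag(this, 'casper-selector', sel);
}

function selectorChainableMethod() {
  // called when 'selector' is used bare in a chain; nothing to do here
}

function visibleMethod() {
  var sel = utils.flag(this, 'casper-selector');
  var casper = utils.flag(this, 'object'); // still the casper instance
  this.assert(
    casper.visible(sel),
    'expected selector ' + sel + ' to be visible',
    'expected selector ' + sel + ' not to be visible'
  );
}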
I may also want to modify the object, for 'length' tests, down the chain, such as:
// there can be only one!
expect(casper).selector("#waldo").length(1)
// but that one has 4 classes
expect(casper).selector("#waldo").class.to.have.length(4)
Grateful for any thoughts and input.
---- EDIT ----
Okay, so here is the conceptual challenge that gave rise to Casper-Chai, which requires a bit of a description of what Casper is and why Casper-Chai should be a Chai plugin and not just an alternative to the existing Casper API. Casper is a wrapper around the PhantomJS headless web browser, and as such Casper effectively runs two distinct virtual machines: a "Controller" and a headless web browser.
There is no DOM or "document" or "window" object in the Controller; the Controller is in this respect a lot like Node.js, albeit using the WebKit JavaScript engine. In parallel, PhantomJS starts a headless web browser. The Controller can then communicate through the PhantomJS/Casper API with the headless browser. The Controller can tell the headless browser what pages to load and what JavaScript to run (which is like typing JavaScript into the console), and can even mimic events such as keyboard input and mouse clicks. The headless browser has a full DOM and JavaScript stack: it is a web page loaded in WebKit. You can capture screenshots of what WebKit would render.
Casper-Chai is run in the Controller. Tests created in Mocha + Chai in the Controller are meant to evaluate against the state of the headless browser. Although we can copy state from the browser to the Controller and run tests on that copied state, my limited experimentation with that approach revealed problems inherent to the design (e.g. efficiency, race conditions, performance, and potential side effects). The problem is that the browser state is dynamic, complex, and can be unwieldy.
So taking John's example, expect(casper.find("#waldo")).to.be.visible would not work, since there is no DOM, unless the object returned by Casper did some sort of lazy evaluation/mediation. Even if we serialized and copied the HTML element over, there is no CSS in the Controller. Then, even if there were CSS for #waldo, there is no way to test the hierarchy as if it were a web browser. We would have to copy a substantial portion of the DOM and all the CSS, and then replicate a web browser, to test whether #waldo is visible - for every single test. Casper-Chai is meant to avoid this problem by running the tests in the browser.
Just for a little extra illumination, a trivial comparison is getting the number of elements that match a selector. One could write expect(casper.evaluate(function () { return __utils__.findAll('.my_class'); })).to.have.length(4), where casper.evaluate runs the given function in the headless browser and returns a list of DOM elements matching the selector as strings; you can think of __utils__ as Casper's version of jQuery. Alternatively, one could write expect(casper).selector('.my_class').to.have.length(4), where selector becomes the 'object' and has a getter .length that calls casper.evaluate(function () { return __utils__.findAll('.my_class').length; }). Only the integer length is returned. For a small number of tests either works fine, but for larger numbers of tests this performance characteristic becomes impactful (here, in this simplistic form, and potentially to a substantially greater extent in more complex cases).
One could of course write expect(casper.evaluate(function () { return __utils__.findAll('.my_class').length; })).to.equal(4), but if one is going to write tests like this, why bother with BDD/Chai? It eliminates the benefit of readability that Chai offers.
It is also worth noting that there may be multiple Casper instances in the Controller, corresponding to multiple PhantomJS pages. Think of them as spooky tabs.
So, given Domenic's answer that modifying the 'object' flag is the appropriate way to go about it, that seems the most practical approach - subject to any thoughts in light of the above description.
I hope the above describes why Casper-Chai should be a plugin and not just an API extension to Casper. I'll also run this by Casper's author to see if he has any input.
It may not be a perfect relationship, but I am hopeful that Casper & Chai can get along handily. :)
The difficulty stems from the fact that Casper has a highly procedural API, with methods like Casper#click(String selector) and Casper#fetchText(String selector). To fit naturally with Chai, an object-oriented API is needed, e.g. Casper#find(String selector) (returning a CasperSelection object), CasperSelection#click(), CasperSelection#text(), etc.
Therefore I suggest you extend the casper object itself with a find or selector method which returns an object on which you can base your assertions. Then you will not need to change the object flag.
expect(casper.find("#waldo")).to.be.visible;
expect(casper.find("#waldo")).to.have.length(1)
expect(casper.find("#waldo").class).to.have.length(4)
I mostly agree with @John. You're performing your expectations on some other object, so saying
expect(casper).select("#waldo").to.have.length(1)
is very strange. You're not expecting anything about casper, you're expecting something about casper.find("#waldo"). Consider also the should syntax:
casper.should.select("#waldo").have.length(1)
// vs.
casper.find("#waldo").should.have.length(1)
That said, if you're dead set on this kind of API, this is exactly what the object flag is for. Chai even does this, to make assertions like
myObj.should.have.property("foo").that.equals("bar")
work well:
https://github.com/chaijs/chai/blob/49a465517331308695c3d8262cdad42c3ac591ef/lib/chai/core/assertions.js#L773
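In that spirit, a rough sketch of an assertion that re-points the 'object' flag, using the hypothetical find from John's answer:

utils.addMethod(chai.Assertion.prototype, 'select', function (sel) {
  var casper = utils.flag(this, 'object');
  // re-point the chain at the selection, so that later assertions
  // like .to.have.length(1) target it instead of casper itself
  utils.flag(this, 'object', casper.find(sel));
});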