How to locally test cross-domain builds in Dojo?

Using the Dojo toolkit, what is the proper way to locally test code that will be executed as cross-domain, without making an actual build?
As it appears, there are three possible options, each with its own drawbacks:
Using the local (non-xd) XMLHttpRequest dojo.require
This option does not really test the xd behavior, since it dojo.require[s] the JS synchronously via XHR.
djConfig.debugAtAllCosts = true;
Although this option does load the required code asynchronously (via a 'script' tag), it also pulls the code in via XHR, parses out the dojo.require[s] inside it, and pulls those in too. This behavior (using the loader_debug) is, again, not what the loader_xd does. More info on this topic in a different question.
Creating a cross-domain build
This approach requires a build, which is not possible in the environment in which I'm running the code. (We're using our own on-the-fly build process, which includes only the JS that is necessary for a particular page; this process is not suitable for development.)
Thus, my question: is there a way to use the loader_xd, which does not require an xd build (which adds the xd prefix / suffix to every file)?
The second way (using debugAtAllCosts) also makes me question the motivation for pre-parsing the dojo.require[s]. If the loader_xd will not (or rather cannot) pre-parse, why does the method that was created for testing/debugging do so?

peller has described the situation. If you wanted to just generate the .xd.js files for your modules, you could look at util/buildscripts/jslib/buildUtilXd.js and its buildUtilXd.xdgen() function.
It would take a bit of work to make your own script, but you could look at util/buildscripts/build.js for pointers.
I am hoping that in the future (maybe in the Dojo 2.x timeframe) we can switch to a loader that just uses script tags, with a module format that has a function wrapper around the module, written by the developer. This would allow the same module format to work in both the local and xd cases.
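A rough sketch of the kind of function-wrapped module format meant here (illustrative only; the module name, dependency, and define() shape are close to what later shipped as AMD in Dojo 1.7+, not something available at the time of writing):
[code]
// Because the module body is wrapped in a function, the file can be
// fetched with a plain <script> tag from any domain and the loader
// decides when to execute the body. No XHR + eval, and no pre-parsing
// for dojo.require() calls: dependencies are declared up front.
define("my/widget", ["dojo/dom"], function (dom) {
    return {
        greet: function (id) {
            dom.byId(id).innerHTML = "hello";
        }
    };
});
[/code]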

I don't think there's any way to do XD loading without building and deploying it. Your analysis of the various options seems about right.
debugAtAllCosts is there specifically to solve a debugging problem: until recently, most browsers could not do anything intelligent with code brought in through eval(). Even today, Firefox will report exceptions in the console as occurring at the eval site (bootstrap.js), with a line number offset from the eval rather than from the actual eval buffer, and normally that eval buffer is anonymous. Firebug was the first debugger to jump through some hoops to enhance the debugging experience; it permitted special metadata, which Dojo's loader injects between the XHR and the eval, to determine a file path to the source. WebKit/Safari have recently implemented this as well. I believe debugAtAllCosts pre-dates the XD loader.

Implementing Custom Cache Store

I just implemented a custom cache store for a different backend, based on the official existing RocksDB one.
That leads me to a number of concerns/questions:
I found out the hard way that PersistenceContextInitializerImpl is auto-generated, and I had let Eclipse add an import to resolve the issue. Now I have to leave it non-imported and showing as an error in Eclipse. Is there a best-practice way to handle this?
Why is RocksDBDbStoreTest#testSegmentsRemovedAndAdded called when segmented is false, since it calls removeSegments, which contractually should not be called if the store is not segmented?
In the same class, why is buildConfig's numSegments set to a value larger than 1 for non-segmented test cases?
Is there any example of a store implementing the NonBlockingStore transactional methods? I mostly want to make sure whether all calls come from the same thread.
I wanted to disable the compatibility test, since it is not supported in prior versions. I changed the test group to unstable or manual, but it would still always get called, which doesn't seem to match the documentation. What is the right way to exclude it from the build-time run?
Are there any performance/stress tests for persistence stores that can be executed or adapted?
I found out the hard way that PersistenceContextInitializerImpl is auto-generated, and I had let Eclipse add an import to resolve the issue. Now I have to leave it non-imported and showing as an error in Eclipse. Is there a best-practice way to handle this?
There should be a way to have Eclipse run the annotation processor. We use IntelliJ and it works fine out of the box.
Why is RocksDBDbStoreTest#testSegmentsRemovedAndAdded called when segmented is false, since it calls removeSegments, which contractually should not be called if the store is not segmented?
This is just a side effect of having a parameterized test like that. At actual runtime it won't be invoked. You can just have the test skip it when the store is not segmented:
if (!segmented) return; // removeSegments only applies to segmented stores
In the same class, why is buildConfig's numSegments set to a value larger than 1 for non-segmented test cases?
The Infinispan data container is always segmented when running in any of the clustered modes. The store, however, is not required to be segmented in those cases. If the store is not segmented, you can ignore any segment parameters, as documented.
Is there any example of a store implementing the NonBlockingStore transactional methods? I mostly want to make sure whether all calls come from the same thread.
You can see some at https://github.com/infinispan/infinispan/blob/main/persistence/jdbc/src/main/java/org/infinispan/persistence/jdbc/stringbased/JdbcStringBasedStore.java#L724. The methods can and will be invoked from different threads, which is why it stores a Map keyed by the Transaction.
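To illustrate the pattern, here is a minimal sketch with simplified stand-in types and method names (not the real NonBlockingStore signatures); the point is only the per-transaction map:
[code]
// Sketch of the bookkeeping pattern used by JdbcStringBasedStore.
// State is keyed by the transaction object because prepare/commit/
// rollback may each arrive on a different thread.
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class TransactionalStoreSketch<TX> {
    private final Map<TX, List<String>> pendingWrites = new ConcurrentHashMap<>();

    void prepare(TX tx, List<String> modifications) {
        // Stage the modifications; nothing hits the backend yet.
        pendingWrites.put(tx, modifications);
    }

    void commit(TX tx) {
        // Look the batch up by transaction, whatever thread we are on.
        List<String> mods = pendingWrites.remove(tx);
        if (mods != null) {
            mods.forEach(this::writeToBackend);
        }
    }

    void rollback(TX tx) {
        // Drop the staged batch without touching the backend.
        pendingWrites.remove(tx);
    }

    private void writeToBackend(String modification) {
        // Backend-specific persistence would go here.
    }
}
[/code]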
I wanted to disable the compatibility test, since it is not supported in prior versions. I changed the test group to unstable or manual, but it would still always get called, which doesn't seem to match the documentation. What is the right way to exclude it from the build-time run?
Not sure I understand the question. For your store you shouldn't need any compatibility test, so just don't copy that test file.
Are there any performance/stress tests for persistence stores that can be executed or adapted?
We have https://github.com/infinispan/infinispan-benchmarks/blob/main/cachestores/src/main/java/org/infinispan/jmhbenchmarks/InfinispanHolder.java that should work.

How to fix the problem of downloading fasttext-model300?

I'm using Windows 10 and Python 3.3. I tried to download fasttext_model300 to calculate soft cosine similarity between documents, but when I run my Python file, it stops after arriving at this statement:
fasttext_model300 = api.load('fasttext-wiki-news-subwords-300')
There are no errors and no "Not Responding" message; it just stops without any reaction.
Does anybody know why it happens?
Thanks
I'd recommend against using the gensim api.load() functionality. It dynamically runs new, unversioned source code from remote servers, which is opaque in its operations and suboptimal for maintaining a secure local configuration or debugging any issues that occur.
Instead, find the actual exact data files you trust, and download them as plain data. Then use specific library operations, like the KeyedVectors.load_word2vec_format() method, to instantiate exactly the model you need, using precise local-file paths you understand.
Following those steps may make it clearer what, if anything, is going wrong. If it doesn't, try also enabling logging at the INFO level to gather more information about what progress is made before failure (and add any new details as a comment or to your question).
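For example, a minimal sketch (the file path is a placeholder; point it at whichever vector file you actually downloaded, e.g. the fastText wiki-news 300d .vec file):
[code]
import logging

from gensim.models import KeyedVectors

# Surface gensim's progress messages so a stall is easier to locate.
logging.basicConfig(format="%(asctime)s : %(levelname)s : %(message)s",
                    level=logging.INFO)

# Placeholder path: use the plain-text .vec file you downloaded yourself.
model = KeyedVectors.load_word2vec_format(
    "C:/models/wiki-news-300d-1M.vec", binary=False)

print(model.most_similar("computer"))
[/code]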
Try downloading the model via the command line first:
[code]
python3 -m gensim.downloader --download fasttext-wiki-news-subwords-300
[/code]
Source: https://awesomeopensource.com/project/RaRe-Technologies/gensim-data

Get the results of an (existing) code inspection

I am new to writing IntelliJ plugins, so I apologize in advance if my question is a bit unclear.
I know that (live) code inspections are achieved via Annotators or LocalInspectionTools. I also know there is an API to write a custom Annotator or Inspection tool and I have seen several examples.
What I do not know (my question): is there a manager/helper/"global inspector" that can provide me with the results of an existing code annotator/inspection process (done by the IDE's plugins or by some 3rd party plugin)?
For instance: I do not want to write a custom lint annotator/inspection plugin for WebStorm. One can configure JSLint/JSHint inside WebStorm's settings, and the results of the live inspection can be seen in the currently open file/editor.
I would like to get the results of this live inspection that occurs in the currently open editor, from inside my own custom code. For this I am interested in the API to get hold of this annotator/inspector and/or the results it provides.
(I apologize for maybe using annotator and inspection terms in a confusing manner)
If there is another question (which I could not find) that duplicates what I have asked above, please re-direct me.
Thank you in advance!
Andrei.
Unfortunately, the regular annotating process for the linters is asynchronous, so you cannot get the annotation results directly (by calling some 'Manager' method).
You can create instances of JSLintInspection, JSHintInspection, etc. and call the #createVisitor().visit(File) method, but the operation is very slow and you must call it outside of the AWT thread.
You can also try running the method com.intellij.codeInsight.daemon.impl.DaemonCodeAnalyzerEx#processHighlights, but as I mentioned above, the annotation results for linters may be unavailable (or outdated).
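For reference, a hedged sketch of what calling it might look like (based on the IntelliJ Platform signature of DaemonCodeAnalyzerEx#processHighlights; verify against the SDK version you target, and remember the caveat above about linter results being missing or stale):
[code]
import com.intellij.codeInsight.daemon.impl.DaemonCodeAnalyzerEx;
import com.intellij.lang.annotation.HighlightSeverity;
import com.intellij.openapi.editor.Document;
import com.intellij.openapi.project.Project;

class HighlightDump {
    // Walks the highlights the daemon has already computed for a document.
    static void dumpWarnings(Project project, Document document) {
        DaemonCodeAnalyzerEx.processHighlights(
                document, project,
                HighlightSeverity.WARNING,    // minimum severity to report
                0, document.getTextLength(),  // whole document
                info -> {
                    System.out.println(info.getSeverity() + ": "
                            + info.getDescription());
                    return true;              // true = keep iterating
                });
    }
}
[/code]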

Access closure property names in the content block at runtime

I want to evaluate my content blocks before running my test suite, but the closures' property names are already compiled into bytecode. I'm looking for the cleanest solution (compared with parsing the source manually).
I already tried the solution outlined in this post (and I'd still wind up doing some regex/parsing), but I could only get it to work via the script execution engine; it failed in the IDE and in GroovyConsole. Rather than embedding a Groovy script in the project's code, I thought I'd try using Geb's native classes.
Is building on the suggestion about extending Geb Navigators here viable for Geb's PageContentSupport class, whose contentTemplates contain a LinkedHashMap of exactly what I need? If yes, would someone provide guidance? If no, any suggestions?
It is currently not possible to get hold of all content elements for a given page/module. Feel free to create an issue for this in Geb's bug tracker, but remember that all that Geb can provide is either a list of content element names or a map from these names to closures that create these elements.
Having that information isn't a generic solution to your problem, because content elements can take parameters, and there are situations where your content elements will be available on the page only after some other actions are performed (for example, you have to click on a button to reveal a section of a page that uses AJAX to retrieve its content). So I'm afraid that simply going over all elements and checking that they don't throw any errors will not cut it.
I'm still struggling to see what "evaluating" all content elements prior to running the suite would buy you. Are you after verifying that your content elements still work, to get faster feedback than from running the whole suite? I'm pretty sure you won't be able to fully automate detection of content definitions that no longer work. In my view it will be more effort than it's worth.

TestCase scripting framework

For our webapp testing environment we're currently using WatiN with a bunch of unit tests, and we're looking to move to Selenium and use more frameworks.
We're currently looking at Selenium 2 + Gallio + xUnit.net.
However, one of the things we're really looking to get around is compiled test cases. Ideally we want test cases that can be edited in VS with IntelliSense, but that don't require re-compiling the assembly every single time we make a small change.
Are there any frameworks likely to help with this issue?
Are there any nice UI tools to help manage massive amounts of test cases?
Ideally we want the testcase writing process to be simple so that more testers can aid in writing them.
cheers
You can write them in a language like Ruby (e.g., IronRuby) or Python, which doesn't have an explicit compile step of this kind.
If you're using a compiled language, it needs to be compiled. Make the assemblies a reasonable size and a quick Shift-F6 (I rewire it to Shift-Ins) will compile your current project (Ctrl-Shift-B will typically do lots of redundant stuff). Then get NUnit to auto-re-run the tests when it detects the assembly change (or go vote on http://xunit.codeplex.com/workitem/8832 and get it into the xunit GUI runner).
You may also find that CR, R# and/or TD.NET have stuff to offer you in speeding up your flow. For example, I believe CR detects which tests have changed and does stuff around that (at the moment it doesn't support the more advanced xunit.net testing styles, so I don't use it day to day).
You won't get around compiling the test framework when you add new tests.
However, there are a few possibilities.
First:
You could develop your own little test language, as I did, in XML or a similar format. It would look something like this:
[code]
<action name="OpenProfile">
    <parameter name="Username" value="TestUser"/>
</action>
[/code]
Once you have this, you could simply write an interpreter that deserializes the XML into an object, and then use reflection to call the appropriate method on the corresponding class (see the sketch below). After you have a lot of actions implemented, properly modularized and carefully structured (e.g., every page has its own object, plus a base object that every page inherits from), you will be able to add XML-based tests on your own without needing to rebuild the framework itself.
You see, you have actions like: login, go to profile, go to edit profile, change password, save, check email, etc. Then you could have tests like: login + change password, login + edit profile username... and so on and so forth. And you would only be creating new XMLs.
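A rough sketch of such an interpreter (all class, method, and action names here are illustrative, not from a real framework; shown in C# since the stack above is .NET):
[code]
// Maps an XML <action> element onto a method via reflection, so new
// test scripts need no recompilation of the framework itself.
using System;
using System.Reflection;
using System.Xml.Linq;

public class Actions
{
    // One public method per scriptable action; in a real framework
    // these would live in per-page classes sharing a common base.
    public void OpenProfile(string username)
    {
        Console.WriteLine($"Opening profile for {username}");
    }
}

public static class XmlTestRunner
{
    public static void Run(string xml)
    {
        XElement action = XElement.Parse(xml);
        string name = (string)action.Attribute("name");
        string value = (string)action.Element("parameter")?.Attribute("value");

        // Reflection resolves the XML action name to a method.
        var target = new Actions();
        MethodInfo method = typeof(Actions).GetMethod(name);
        method.Invoke(target, new object[] { value });
    }

    public static void Main()
    {
        Run("<action name=\"OpenProfile\">" +
            "<parameter name=\"Username\" value=\"TestUser\"/></action>");
    }
}
[/code]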
You could also look for frameworks supporting similar behavior; there are a few out there. The best known are Cucumber and FitNesse. These support high-level test case writing on top of low-level functionality building.
So basically, once you have your framework ready, all you have to do is write tests.
Hope that helped.
Gergely.