I have created a multi-layer build using build.dojotoolkit.org (my first attempt) with 3 layers: dojo.js, dojox.js, and dijit.js. Each .js file is uploaded to its own folder (dojo, dojox, dijit).
When I run the code, I would expect it to look in dijit.js to get the form modules like dijit.form.TextBox. But instead it tries to load dijit/form/TextBox.js and of course ends up with a 404 error.
What am I doing wrong?
The files are here if it helps:
http://usermanagedsolutions.com/Demos/Pages
Manually include each layer in a script tag on the page.
<script src="path/to/dojo.js" />
<script src="path/to/dojox.js" />
<script src="path/to/dijit.js" />
This will make all the modules that you defined in the build available. When you require the text box, Dojo will see that it already has the code and will not make an XHR call.
Even though you do not intend to use the individual files, you may want to put them on the server as well. That way, if someone forgets to add a file to the build, the penalty incurred is an XHR request, as opposed to a JavaScript error.
Re: AMD
When you include your layers in the manner described above, you are not loading all the modules that you included in the build - you are just making the define functions available without having to make XHR requests.
If you look at the .js file that the build outputs, it contains a map from each module path to a function that, when called, will define the module.
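For illustration, a built layer file looks roughly like this (a simplified sketch of the concept, not the builder's exact output):

require({cache: {
    "dijit/form/TextBox": function () {
        // calling this function is what actually defines the module
        define(["dojo/_base/declare"], function (declare) {
            return declare("dijit.form.TextBox", null, { /* ... */ });
        });
    },
    "dijit/form/Button": function () {
        define([ /* dependencies */ ], function () { /* ... */ });
    }
}});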
So when you write the following code:
require(["dijit/form/TextBox"], function(TextBox){
...
});
AMD will first determine whether dijit/form/TextBox has already been defined. If so, it will just take the object and execute the callback.
If the module hasn't already been defined, then AMD will look in its cache to see if the define code is available. When you include your script files, you are providing a cache of define functions. AMD finds the code to define the module, calls the define function, and the result is the object that is passed into the callback. Subsequent requires for dijit/form/TextBox will also use this object, as described above.
If the module hasn't already been defined and AMD does not find the define function in its cache, then AMD will make an XHR request back to the server to try to locate the specific module code. The result of the XHR call should provide the define function. AMD will call the function and use the result as the object to pass into the callback. Again, subsequent requires for dijit/form/TextBox will also use this object.
The Dojo build provides the ability to (1) minify the code and (2) combine it into fewer files that need to be requested from the server.
AMD allows you to write code that can run in either environment (using built files or the individual files) without having to make modifications.
Is the URL not available across feature files anymore?
As an example, in our main feature files we would set the background like this:
Background:
* url url
* header Authorization = token
* def baseUrl = 'care/v1.1/account/'
Here the url value comes from our JavaScript config file. We have multiple environments we run our Karate suite on, and config files that point to each of them, so the url is unique per environment. Then, in the required scenarios, there would be a call to a "helper" feature file. Inside that feature file there is no background and only one scenario. That scenario looks like:
* path baseUrl
Given path 'MTYzODJAQDg=/call/add'
That worked fine with Karate 1.2.0. Now, on Karate 1.3.0.RC2, that setup is failing. It's as if the url variable is no longer being shared with the helper feature files. The scenarios that call helper feature files now fail.
I've been able to "fix" this by adding the same url declaration that is in the main feature files to our helper feature files, essentially so that all feature files have it.
My question is: is this now the expected behaviour in the new version?
First, path is not a "variable" and it is NOT designed to work when you call anything.
I have two suggestions:
1. Set your environment-specific config in karate-config.js. That is what it is designed for, and you can very well call a feature to do it. See karate.callSingle().
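For example, a minimal karate-config.js might look like this (a sketch; the environment names and URLs are placeholders, not from the question):

function fn() {
  // karate.env is set on the command line, e.g. -Dkarate.env=qa
  var env = karate.env || 'dev';
  var config = { baseUrl: 'care/v1.1/account/' };
  if (env === 'dev') {
    config.apiUrl = 'https://dev.example.com';
  } else if (env === 'qa') {
    config.apiUrl = 'https://qa.example.com';
  }
  return config;
}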
2. Use variables to "pass around" information that "called" features can use. Or, when you make a call, you can "return" values that can be used by subsequent steps.
I think that trying to use a "call" instead of pre-setting variables is the source of your troubles. If you need to switch the url when you call something, just use a variable.
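For example, the caller can pass the url into the helper as a variable (a sketch; the targetUrl name is made up):

# caller.feature (sketch)
* def result = call read('helper.feature') { targetUrl: '#(apiUrl)' }

# helper.feature (sketch)
Scenario:
* url targetUrl
Given path 'MTYzODJAQDg=/call/add'
# ... method and assertions follow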
Adding the url declaration (* url url) to the "helper" feature files, essentially having it in every feature file, will "fix" the "problem".
I'm learning about the case for asynchronous module definition (AMD) from here, but am not quite clear about the following:
It is tempting to use XMLHttpRequest (XHR) to load the scripts. If XHR is used, then we can massage the text above -- we can do a regexp to find require() calls, make sure we load those scripts, then use eval() or script elements that have their body text set to the text of the script loaded via XHR.
XHR is using Ajax or something to make a call to grab a resource from the database, correct? What do eval() or script elements have to do with this? An example would be very helpful.
That part of RequireJS's documentation is explaining why using XHR, rather than doing what RequireJS does, is problematic.
XHR is using Ajax or something to make a call to grab a resource from the database, correct?
XHR is what allows you to make an Ajax call. jQuery's $.ajax, for instance, creates an XHR instance for you and uses it to perform the query. How the server responds depends on how the server is designed. Most of the servers I've developed won't use a database to answer a request made to a URL that corresponds to a JavaScript file. The file is just read from the file system and sent back to the client.
What do eval() or script elements have to do with this?
Once the request is over, what you have is a string that contains JavaScript. You've fetched the code of your module, but presumably you also want to execute it. eval is one way to do it, but it has the disadvantages mentioned in the documentation. Another way would be to create a script element whose body is the code you've fetched and insert it into the DOM, but this also has issues, as explained in the documentation you refer to.
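To make this concrete, here is a rough sketch of the approach the documentation is warning about (a hypothetical illustration, not how RequireJS is actually implemented):

var xhr = new XMLHttpRequest();
xhr.open('GET', 'js/myModule.js'); // fetch the module source as plain text
xhr.onload = function () {
    var code = xhr.responseText; // a string of JavaScript

    // Option 1: eval the string (error locations and debugging
    // information then point at the eval site, not the source file).
    eval(code);

    // Option 2 (instead of eval): create a script element whose body
    // is the fetched code and insert it into the DOM:
    // var script = document.createElement('script');
    // script.text = code;
    // document.head.appendChild(script);
};
xhr.send();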
I am trying to write nice unit tests for my already created REST API. I have this simple structure:
ROOT/
config/
handlers/
lib/
models/
router/
main.go
config contains the JSON configuration and one simple config.go that reads and parses the JSON file and fills the Config struct. handlers contains the controllers (i.e., the handlers for the respective METHOD+URL combinations described in router/routes.go). lib contains some DB, request-responder, and logger logic. models contains structs and their funcs to be mapped from and to JSON and the DB. Finally, router contains the router and route definitions.
Now, I have been searching and reading a lot about unit testing REST APIs in Go and found more or less satisfying articles about how to set up a testing server, define routes, and test my requests. All fine. BUT only if you want to test a single file!
My problem now is how to set up the testing environment (server, routes, DB connection) for all handlers. With the approach found here (which I find very easy to understand and implement) I have one problem: either I have to run the tests separately for each handler, or I have to write the test suites for all handlers in just one test file. I believe you understand that neither case is satisfying (the first because I want a single go test run to execute all tests, and the second because having one test file cover all handler funcs would become unmaintainable).
So far I have succeeded (following the linked article) only by putting all testing and initializing code into one func per XYZhandler_test.go file, but I don't like this approach either.
What I would like to achieve is some kind of setUp() or init() that runs once when the first test is triggered, making all required variables globally visible and initialized, so that all subsequent tests can use them without instantiating them again, while making sure that this setup file is compiled only for tests...
I am not sure if this is completely clear or if a code example is required for this kind of question (other than what is already linked in the article), but I will add anything that you think is required. Just tell me!
Test packages, not files!
Since you're testing handlers/endpoints, it would make sense to put all your _test files in either the handlers or the router package (e.g., one file per endpoint/handler).
Also, don't use init() to set up your tests. The testing package specifies a function with the following signature:
func TestMain(m *testing.M)
The generated test will call TestMain(m) instead of running the tests directly. TestMain runs in the main goroutine and can do whatever setup and teardown is necessary around a call to m.Run. It should then call os.Exit with the result of m.Run.
Inside the TestMain function you can do whatever setup you need in order to run your tests. If you have global variables, this is the place to declare and initialize them. You only need to do this once per package, so it makes sense to put the TestMain code in a separate _test file. For example:
package router

import (
    "net/http/httptest"
    "os"
    "testing"
)

var (
    // testServer is shared by all tests in this package
    testServer *httptest.Server
)

func TestMain(m *testing.M) {
    // setup: start the test server once for the whole package
    router := ConfigureRouter()
    testServer = httptest.NewServer(router)

    // run the tests, then tear down before exiting
    code := m.Run()
    testServer.Close()
    os.Exit(code)
}
Finally, run the tests with go test my/package/router.
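An individual endpoint test in the same package can then use the shared server, along these lines (a sketch; the /users route and the expected status are assumptions, not from the question):

// in another _test file in the same package
package router

import (
    "net/http"
    "testing"
)

// TestGetUsers is a hypothetical endpoint test; adapt the route and
// assertions to your actual handlers.
func TestGetUsers(t *testing.T) {
    resp, err := http.Get(testServer.URL + "/users")
    if err != nil {
        t.Fatalf("request failed: %v", err)
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        t.Errorf("expected status 200, got %d", resp.StatusCode)
    }
}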
Perhaps you could put the setup code that you want to use from multiple unit test files into a separate package that only the unit tests use?
Or you could put the setup code into the normal package and just use it from the unit tests.
It's been asked before, but the Go authors have chosen not to implicitly supply a test build tag that could be used to selectively compile functions within the normal package files.
I'm trying to follow the basic cometd example here: http://dojotoolkit.org/reference-guide/1.7/dojox/cometd.html
It uses the old module loader, so I tried the equivalent as follows:
require(["dojo/ready","dojo/io/script","dojox/cometd","dojox/cometd/callbackPollTransport"], function(ready, dontcare, cometd) {
ready(function(){
cometd.init('http://localhost:8080/MyCometD/cometd');
comted.subscribe("/test", function(msg){
console.debug(msg);
});
});
});
This doesn't work, and I think it has to do with module loading - there is some sort of silent error, as the code within the ready function does not execute at all. What I found is that whenever the "dojox/cometd" require statement is present, the code within the ready function does not execute.
Running example: http://jsfiddle.net/Q9W8f/2/
Example with dojox/cometd removed: http://jsfiddle.net/mMs2h/4/
I haven't worked with the new module loader that much so I bet I just have some simple misconception.
Help!
It seems like you're correct: there is a 'wait-loop' for a module requirement that never gets loaded. This may be any of the requirements inside dojox.cometd, and you'd need to rewrite the codebase for a fix.
I have had a similar issue with the RollingListPane, also in the dojox repository - and the developers are saying 'we are 100% AMD compliant with 1.7'. However, the X in dojox is short for experimental. The development of dojox modules is not done by the core Dojo Toolkit team, and there are still glitches.
For starters, try to avoid using the CDN, which has run the build on every single module; this tends to fail at times when using AMD. Instead, download the tarball and use a local, uncompressed copy (dojo-release-1.7.2-src).
You can find a hello-world example using CometD and ExtJS at the following link:
http://jksnu.blogspot.in/2013/08/network-reliability-by-cometd-hellow_16.html
on "twill" documentation page it is written:
By default, twill will run pages through tidy before processing them. This is on by default because the Python libraries that parse HTML are very bad at dealing with incorrect HTML, and will often return incorrect results on "real world" Web pages. To disable this feature, set config do_run_tidy 0.
But where is this tidy program located inside twill? I have downloaded twill 0.9 and looked through the twill folder's contents, but I just can't find a file (or module) there named "tidy".
twill uses the command-line version of tidy if it is installed on your system. The method that calls tidy to clean your code is located in utils.py and is named run_tidy. It is called by the command tidy_ok, which is defined in commands.py.
If use_tidy is set to true (which it is by default), the _cleanup_html method in ConfigurableParsingFactory calls the run_tidy method.