Where is "tidy" located in "twill"? - module

on "twill" documentation page it is written:
By default, twill will run pages through tidy before processing them. This is on by default because the Python libraries that parse HTML are very bad at dealing with incorrect HTML, and will often return incorrect results on "real world" Web pages. To disable this feature, set config do_run_tidy 0
But where is this tidy program located inside twill? I have downloaded "twill 0.9" and looked through the "twill" folder contents - I just can't find a file (or module) there named "tidy".

twill uses the command-line version of tidy, if it is installed on your system. The method that calls tidy to clean your code is located in utils.py and is named 'run_tidy'. It is called by the command 'tidy_ok', which is defined in commands.py.
If use_tidy is set to true (which it is by default), the _cleanup_html method in ConfigurableParsingFactory calls the run_tidy method.
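For reference, here is a minimal sketch of that general pattern - shelling out to the external tidy binary and reading the cleaned markup back. It is not a copy of twill's run_tidy; the flags shown are just common tidy options:

# Sketch only: illustrates piping HTML through the command-line tidy program,
# not twill's actual utils.run_tidy implementation.
import subprocess

def run_tidy_sketch(html):
    proc = subprocess.Popen(
        ["tidy", "-q", "-asxhtml"],   # -q: quiet, -asxhtml: emit well-formed XHTML
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    cleaned, errors = proc.communicate(html.encode("utf-8"))
    # tidy exits with 1 for warnings and 2 for errors; only treat 2 as a failure
    if proc.returncode == 2:
        return None, errors.decode("utf-8")
    return cleaned.decode("utf-8"), errors.decode("utf-8")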

Is the URL not available across feature files anymore?

As an example, in our main feature files we would set the background as such:
Background:
* url url
* header Authorization = token
* def baseUrl = 'care/v1.1/account/'
The url comes from our JavaScript config file. We have multiple environments we run our Karate suite on, and config files that point to each of them, so the url is unique per environment. Then, in the required scenarios, there would be a call to a "helper" feature file. Inside that feature file there is no background and only 1 scenario. That scenario would look like:
* path baseUrl
Given path 'MTYzODJAQDg=/call/add'
That would work fine with Karate 1.2.0. Now, on Karate 1.3.0.RC2, that setup is failing. It's as if the url variable is not being shared with the helper feature files. The scenarios that call helper feature files now fail.
I've been able to "fix this", by adding the same url declaration that is in the main feature files, to our helper feature files, essentially so all feature files have it.
My question is, is this now the expected behaviour in the new version.
First, path is not a "variable" and it is NOT designed to work when you call anything.
I have 2 suggestions.
Set your environment-specific config in karate-config.js. That is what it is designed for, and you can very well call a feature to do it. See karate.callSingle().
Use variables to "pass around" information which "called" features can use. Or when you make a call, you can "return" values that can be used by subsequent steps.
I think that trying to use a "call" instead of pre-setting variables is the source of your troubles. If you need to switch the url when you call something, just use a variable.
Adding the url declaration (* url url) to the "helper" feature files, essentially having it in every feature file, will "fix" the "problem".
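As a sketch of the first suggestion, karate-config.js can own the environment-specific url so that every feature (including called ones) receives it as a plain variable. The environment names and URLs below are made up:

// karate-config.js - sketch only; environment names and URLs are illustrative
function fn() {
  var env = karate.env || 'dev';   // select with -Dkarate.env=qa etc.
  var config = { baseUrl: 'care/v1.1/account/' };
  if (env === 'dev') {
    config.appUrl = 'https://dev.example.com';
  } else if (env === 'qa') {
    config.appUrl = 'https://qa.example.com';
  }
  return config;
}

Every key returned here becomes a variable in all features, so both the main feature and the helper feature can simply do * url appUrl instead of relying on the caller's HTTP state.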

Xquery extracting property values from .properties file

I am currently trying to extract property values from my properties file, but am running into some problems. I can't test this in the MarkLogic Query Console, because the properties file doesn't exist there. I am currently trying to grab the values from the file like this:
let $port := #{#properties["ml.properties-name"]}
I've also looked at
xdmp:document-get-properties(
  $uri as xs:string,
  $property as xs:QName
)
however, that is limited to .xml files, I believe. Does anyone have a way/work-around for accessing these values? I can't seem to find one. I've looked at some documentation on MarkLogic's website, but can't seem to get anything to work. The way I was accessing these before was in Ruby, through monkey-patching that allowed me to access those private fields. The problem with that is that the Ruby script I call is only called once, while my .xqy file runs every minute and sends args to another function. I need to get those args from the properties file; right now I just have them hard-coded in. Any thoughts?
Thanks
You cannot access deployment properties like that, but you can pass them along with the deployment. If you create a new REST app with the latest Roxy, you should get a copy of this config.xqy added to src/config/:
https://github.com/marklogic-community/roxy/blob/master/deploy/sample/custom-config.xqy
That file is treated specially when deployed to the modules database: property references are replaced inside it. In your case, add another variable and give it a string value following the #ml.xyz pattern:
declare variable $c:port := "#ml.property-name";
You can then import the config lib, and use it in your code.
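For example (sketch only - the namespace URI and module path below are assumptions; copy the exact values from the config.xqy Roxy generated for you):

xquery version "1.0-ml";

(: assumed namespace URI and location - take these from your own config.xqy :)
import module namespace c = "http://marklogic.com/roxy/config"
  at "/config/config.xqy";

(: $c:port contains "#ml.property-name" in source; Roxy substitutes the real value at deploy time :)
xs:unsignedLong($c:port)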
These so-called Deployer Substitutions are described in more detail on the Roxy wiki:
https://github.com/marklogic-community/roxy/wiki/Deployer-Substitutions

ADTF SDK: import manifest AND handle it

I'm trying to run a full ADTF configuration from my own C++ command-line application using the ADTF SDK. ADTF version: 2.9.1 (pretty old).
Here's what I have (want) to do:
Load manifest file
Load globals-xml
Load config-xml
2 & 3 are done, using the session-manager service - see ISessionManager interface: https://support.digitalwerk.net/adtf/v2/adtf_sdk_html_docs/classadtf_1_1_i_session_manager.html , functions LoadGlobalsFromFile & LoadConfigFromFile.
The problem is that I don't know how to do point 1: currently, instead of loading a manifest, I manually load the list of services myself using _runtime->RegisterPlugin, _runtime->CreateInstance and _runtime->RegisterObject.
What I've managed to do is to load only the namespace service and use the INamespace interface which has a method for loading manifest files: https://support.digitalwerk.net/adtf/v2/adtf_sdk_html_docs/classadtf_1_1_i_namespace.html - see ImportFile with ui32ImportFlags = CF_IMPORT_MANIFEST.
But this only loads the manifest settings into the namespace, it doesn't actually instantiate the services. I could do it manually, by:
Do _runtime->RegisterPlugin for every url under root/plugins/ in the namespace
Do _runtime->CreateInstance for every objectid under root/services/ in the namespace
But I want this to be more robust and I'm hoping there's already a service that handles the populated namespace subsequently and does these actions. Is there such a service?
Note: if you know how this could be done in ADTF3 that might also be of help for me, so don't hesitate to answer/comment
UPDATE
See "Flow of the system" on this page: https://support.digitalwerk.net/adtf/v2/adtf_sdk_html_docs/page_service_layer.html
Apparently the runtime instance itself handles the manifest file (see run-levels shutdown & kernel) but I don't know how I'm supposed to tell it where it is.
I've tried setting the command-line arguments to be count = 2 and the 2nd = manifest file path when instantiating cRuntime. It doesn't work :).
In ADTF3 you can just use the supplied cADTFSystem class to initiate an ADTF system and then use the ISessionManager interface to load a session of your choice.
Found the answer, not exactly what I expected though. I tried debugging adtf_runtime.exe to find out what arguments it passes to cRuntime.
The result is indeed similar to what I've suspected (and actually tried):
arg1 = adtf_runtime.exe (argv[0] in adtf_runtime)
arg2 = full path to manifest file (e.g. $(ADTF_DIR)\bin\adtf_devenv.manifest)
arg3 = basename of manifest file, without extension (e.g. "adtf_devenv")
While this suggested that cRuntime is indeed responsible for loading and handling the manifest, it turned out to be NOT quite so: passing the same arguments to it did not do the job. The answer came when I noticed that adtf_runtime.exe was actually using an extension of cRuntime called cRuntimeEx which is NOT part of the SDK (at least I haven't found it).
This class IS among the exported symbols of the ADTF SDK library, i.e. a "dumpbin /symbols adtfsdk_290.lib" renders at some point:
public: __cdecl adtf::cRuntimeEx::cRuntimeEx(int,char const * * const,class ucom::IException * *)
but it is NOT part of the SDK (you won't find a header file defining it).
Among its methods you'll also find this:
protected: long __cdecl adtf::cRuntimeEx::LoadManifest(class adtf_util::cString const &,class std::set,class std::allocator > *,class ucom::IException * *)
Voila. And thus, unfortunately, I cannot achieve what I wanted in a robust fashion. :)
I ended up manually implementing the manifest-loading logic, since cRuntimeEx is not made available within the SDK. Something along these lines (a rough sketch follows the list):
Use a cDOM instance to load the manifest file
Call FindNodes("/adtf:manifest/environment/variable") to find the environment-variables that need to be set and set them using "cSystem::SetEnvVariable"
Call FindNodes("/adtf:manifest/dependencies/platform") to find library dependencies and use cDynamicLinkage::Load to load the libraries that target the current platform (win32/linux)
Call FindNodes("/adtf:manifest/plugins/plugin") to find the services to be loaded using _runtime->RegisterPlugin (you may also handle "optional" attribute)
Call FindNodes("/adtf:manifest/services/service") to find the services that need to be created using _runtime->CreateInstance and _runtime->RegisterObject (you may also handle "optional" attribute)
And, finally, call FindNodes("/adtf:manifest/manifests/manifest") to (recursively) load child-manifests (you may also handle "optional" attribute)
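Here is a heavily simplified sketch of steps 1, 4 and 5 (plugin and service registration). The cDOM accessors and the IRuntime method signatures are written from memory and may not match the ADTF 2.9.1 headers exactly, GetElementText is a hypothetical helper for reading a node's text content, and _runtime is the runtime pointer mentioned above:

// Sketch only - verify types and signatures against the SDK headers before use.
tResult LoadManifestManually(const cString& strManifestFile)
{
    cDOM oDOM;
    RETURN_IF_FAILED(oDOM.Load(strManifestFile));

    // step 4: register every plugin listed under /adtf:manifest/plugins/plugin
    cDOMElementRefList oPlugins;
    if (IS_OK(oDOM.FindNodes("/adtf:manifest/plugins/plugin", oPlugins)))
    {
        for (cDOMElementRefList::iterator it = oPlugins.begin(); it != oPlugins.end(); ++it)
        {
            cString strUrl = GetElementText(*it, "url");   // hypothetical helper
            RETURN_IF_FAILED(_runtime->RegisterPlugin(strUrl));
        }
    }

    // step 5: create and register every service listed under /adtf:manifest/services/service
    cDOMElementRefList oServices;
    if (IS_OK(oDOM.FindNodes("/adtf:manifest/services/service", oServices)))
    {
        for (cDOMElementRefList::iterator it = oServices.begin(); it != oServices.end(); ++it)
        {
            cString strOID = GetElementText(*it, "objectid");   // hypothetical helper
            ucom::IObject* pService = NULL;
            RETURN_IF_FAILED(_runtime->CreateInstance(strOID, ucom::IID_OBJECT, (tVoid**) &pService));
            RETURN_IF_FAILED(_runtime->RegisterObject(pService, strOID, 0));
        }
    }

    // environment variables, platform dependencies and child manifests (steps 2, 3 and 6)
    // would be handled the same way via FindNodes on their respective paths
    RETURN_NOERROR;
}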
The only thing you need to do is start the adtf launcher with the meta files (manifest). This works for adtf 2 as well as for adtf 3, and it can be done from a (console) application. If you also want to do a little bit more in adtf 3, you can use adtf control instead of adtf launcher, with its scripting interface (see the scripts under examples).

Resource Files in CF - Not Embedded

I have a PPC2003 project in VS2005. I have added a resource file (SomeResources.resx) to the project. I can access the test string I have in the file by using My.Resources.SomeResources.MyTestString (I am using the default Custom Tool Name that VS provides).
When the Build Action property of the file is set to Embedded Resource, the application references MyTestString successfully.
But I do not want to embed the file, so that its string values can be modified after it has been deployed/installed.
I, therefore, changed the Build Action to Content, so that the file gets copied out to the device for potential future manipulation. When I call MyTestString I get the following error:
MissingManifestResourceException
Stack Trace:
at System.Resources.ResourceManager.InternalGetResourceSet()
at System.Resources.ResourceManager.InternalGetResourceSet()
at System.Resources.ResourceManager.InternalGetResourceSet()
at System.Resources.ResourceManager.GetString()
at MyApp.My.Resources.SomeResources.get_MyTestString()
at MyApp.fMain.fMain_Load()
at System.Windows.Forms.Form.OnLoad()
at System.Windows.Forms.Form._SetVisibleNotify()
at System.Windows.Forms.Control.set_Visible()
at System.Windows.Forms.Application.Run()
at MyApp.fMain.Main()
As the file is not embedded, do I maybe need to load it manually first? If so, how? Any other ideas? Is it not possible to do what I'm trying to achieve, and should I just create my own XML file/reader?
Resources (resx files) are specifically designed to be compiled into the application. If you want it to be an editable content file on the target, then you have to approach it differently and use something like an XML file and wrap that with accessors (akin to the Configuration namespace stuff in the full framework).
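A minimal sketch of that approach (the file name, element names and helper below are made up, not from the original project): deploy a Settings.xml next to the executable with Build Action = Content, and read it with a small accessor.

' Sketch only - Settings.xml and the element names are illustrative.
Imports System.IO
Imports System.Reflection
Imports System.Xml

Public Module AppSettings
    Public Function GetSetting(ByVal name As String) As String
        ' Folder of the running .exe (the usual Compact Framework idiom)
        Dim folder As String = Path.GetDirectoryName( _
            Assembly.GetExecutingAssembly().GetName().CodeBase)
        Dim doc As New XmlDocument()
        doc.Load(Path.Combine(folder, "Settings.xml"))
        ' Walk the root element's children rather than using XPath, so the
        ' sketch also works on older Compact Framework versions
        For Each node As XmlNode In doc.DocumentElement.ChildNodes
            If node.NodeType = XmlNodeType.Element AndAlso node.Name = name Then
                Return node.InnerText
            End If
        Next
        Return Nothing
    End Function
End Module

With a Settings.xml like <settings><MyTestString>Hello</MyTestString></settings>, the call AppSettings.GetSetting("MyTestString") takes the place of My.Resources.SomeResources.MyTestString.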

Code looking for modules in the wrong place

I have created a multi-layer build using build.dojotoolkit.org (my first attempt) with 3 layers: dojo.js, dojox.js, dijit.js. Each js file is uploaded in its own folder (dojo, dojox, dijit).
When I run the code, I would expect it to look in dijit.js to get the form modules like dijit.form.TextBox. But instead it tries to load dijit/form/TextBox.js and of course ends up with a 404 error.
What am I doing wrong?
The files are here if it helps:
http://usermanagedsolutions.com/Demos/Pages
Manually include each layer in a script tag on the page.
<script src="path/to/dojo.js" />
<script src="path/to/dojox.js" />
<script src="path/to/dijit.js" />
This will make available all modules that you have defined in the build. When you require the text box, Dojo will see that it has the code and will not make the XHR call.
Even though you do not intend to use the individual files, you may want to put them on the server as well. This way, if someone forgets to add a file to the build, the penalty incurred is an XHR request, as opposed to a JavaScript error.
Re: AMD
When you include your layers in the manner that I described above, you are not loading all the modules that you included in the build - you are just making the define functions available without having to make XHR requests.
If you look at the js file that is output from the build, the file contains a map of the module path to a function that when called will define the module.
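To illustrate, the inside of a built layer looks roughly like this (the module ids and contents are made up, and the real builder output is minified):

// Illustrative sketch of a built layer's contents - a cache of module ids
// mapped to functions that call define() when the module is first required.
require({cache: {
    "dijit/form/TextBox": function(){
        define(["dojo/_base/declare" /* , ... */], function(declare){
            /* widget code */
        });
    },
    "dijit/form/Button": function(){
        define([/* dependencies */], function(){
            /* widget code */
        });
    }
}});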
So when you write the following code
require(["dijit/form/TextBox"], function(TextBox){
...
});
AMD will first determine if dijit/form/TextBox has already been defined. If so, it will just take the object and execute the callback.
If the module hasn't already been defined, then AMD will look in its cache to see if the define code is available. When you include your script files, you are providing a cache of define functions. AMD finds the code to define the module, calls this define function, and the result is the object that is passed into the callback. Subsequent requires for dijit/form/TextBox will also use this object as described above.
If the module hasn't already been defined and AMD does not find the define function in its cache, then AMD will make an XHR request back to the server to try to locate the specific module code. The result of the XHR call should provide the define function. AMD will call the function and use the result as the object to pass into the callback. Again, subsequent requires for dijit/form/TextBox will also use this object.
The Dojo build provides the ability to 1) minify the code and 2) combine it into fewer files that need to be requested from the server.
AMD allows you to write code that can run in either environment (using built files or the individual files) without having to make modifications.