How to optimize Dojo loading time?

I'm working on a business application built on PHP and the Dojo toolkit. The interface is similar to what you see on the Dojo dijit theme tester.
Over the internet it takes a lot of time to load all those JS files one by one.
I want to know what technique the theme tester demo uses that makes it load so much faster than the one we built.
What are the best practices for optimizing its loading time?

You have rightly observed that the biggest cause of the runtime performance issue is the many roundtrips to the server to fetch the small JS files.
While the modular design of Dojo is very beneficial at design time (widget extensions, namespacing, etc.), at runtime you are expected to optimize the Dojo bits, and the way to do that is a custom build.
A custom build will give you a big performance boost: the hundreds of roundtrips will be reduced to one or two, and the size of the payload will also decrease dramatically. We have seen a 50x performance improvement with a custom build.
The custom build creates an optimized, minified JS file that contains only the code you use in the app.
You can define multiple layers depending on how you want to segregate your application JS files (for example, one single compressed file versus multiple files included in different UIs); a sketch of a build profile follows the links below.
Depending on the version of Dojo you are using, see:
http://dojotoolkit.org/reference-guide/1.7/build/index.html#build-index
http://dojotoolkit.org/reference-guide/1.7/build/pre17/build.html#build-pre17-build
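As a rough idea, here is a minimal sketch of a 1.7-style build profile that defines a single layer. The package names, paths, and layer name are assumptions about your layout (an application package called app with an entry module app/main), not details from the question:

    var profile = {
        basePath: "./src",        // where your dojo, dijit and app packages live
        releaseDir: "../release", // where the optimized build is written
        action: "release",

        packages: [
            { name: "dojo",  location: "dojo" },
            { name: "dijit", location: "dijit" },
            { name: "app",   location: "app" }   // hypothetical application package
        ],

        layers: {
            // Everything app/main depends on gets baked into one minified file
            "app/main": {
                include: [ "app/main" ],
                boot: true,       // include the loader so one <script> tag is enough
                customBase: true
            }
        },

        // minify with the Closure compiler
        optimize: "closure",
        layerOptimize: "closure",
        cssOptimize: "comments"
    };

You then run this through the build tool that ships with the Dojo source (something like util/buildscripts/build.sh --profile app.profile.js) and point your pages at the single layer file in the release directory.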
While it looks daunting at first, stick with it and you will be able to create an optimized version and see the benefits :)

Related

Is Selenium with a keyword + data-driven framework better, or the Page Object Model?

I developed a framework using keywords (the keywords are driven from Excel, so it is data-driven as well) and used TestNG features too. For locators, I am using a properties file. This works fine for me, and I am able to maintain, add, delete, and modify test cases. In addition, I am able to skip a step or proceed to the next step even if a step has failed, stop execution of a test case if a step has failed and move to the next test case, execute test cases selectively, and take screenshots whenever required (not just when a test step has failed). I am quite convinced by this framework.
My question is: why do people prefer the Page Object Model over frameworks like this, which work so well and are simple to use?
It depends on what fits best for you.
As you mentioned, you are fairly convinced by the existing framework because it does most of what is required of a web automation framework, so you can continue with it.
Generally, a framework should be designed so that it is easy to:
understand
adapt
maintain and change
grow
keep lightweight
for its creator and users, not just for its developer.
Now, to answer your question: if the existing framework is capable of doing the features mentioned above flawlessly, and its users are also reasonably convinced by it and can automate rapidly and with ease, then you are good.
Data-driven: you need the data in both cases, whether you use properties files or the POM. So it comes down to properties files vs. the Page Object Model.
Properties files
Easy to understand (by users of the framework)
Reduces the amount of code (some automation testers prefer less code; not always true)
The framework must provide a way to load the files and fetch the locators
If the project grows, you need more files, and loading more files consumes more memory
Creates confusion if files are not categorized properly
Page Object Model
Might be a bit tricky for users to understand (not always true)
Adds more code
It's flexible (can grow and shrink smoothly)
Consumes less memory compared to properties files
And yes, the power of encapsulation (see the sketch below)
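To make the comparison concrete: a page object keeps locators and interactions together behind one class, so tests never touch raw locators. The question is about a Java/TestNG stack, but the idea is the same in any binding; here is a minimal sketch using the Node selenium-webdriver package, with made-up locator ids:

    const { By } = require('selenium-webdriver');

    // Hypothetical page object for a login screen: the locators and the
    // interaction logic live together here instead of in a properties file.
    class LoginPage {
        constructor(driver) {
            this.driver = driver;
            this.username = By.id('username');     // made-up locators
            this.password = By.id('password');
            this.submitButton = By.id('loginButton');
        }

        // One method encapsulates the whole login flow.
        async loginAs(user, pass) {
            await this.driver.findElement(this.username).sendKeys(user);
            await this.driver.findElement(this.password).sendKeys(pass);
            await this.driver.findElement(this.submitButton).click();
        }
    }

    module.exports = LoginPage;

A test then reads as await new LoginPage(driver).loginAs('user', 'secret'), and a changed locator is fixed in exactly one place.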
In conclusion, there is a trade-off between the usability of the framework and its adaptability. Understand the size and complexity of the project and the users of the framework, and you will find the most suitable framework architecture.

Most Efficient Multipage RequireJS and Almond setup

I have multiple pages on a site using RequireJS, and most pages have unique functionality. All of them share a host of common modules (jQuery, Backbone, and more); all of them have their own unique modules, as well. I'm wondering what is the best way to optimize this code using r.js. I see a number of alternatives suggested by different parts of RequireJS's and Almond's documentation and examples -- so I came up with the following list of possibilities I see, and I'm asking which one is most recommended (or if there's another better way):
Optimize a single JS file for the whole site, using Almond, which would load once and then stay cached. The downside of this most simple approach is that I'd be loading onto each page code that the user doesn't need for that page (i.e. modules specific to other pages). For each page, the JS loaded would be bigger than it needs to be.
Optimize a single JS file for each page, which would include both the common and the page-specific modules. That way I could include Almond in each page's file and would only load one JS file on each page -- which would be significantly smaller than a single JS file for the whole site would be. The downside I see, though, is that the common modules wouldn't be cached in the browser, right? For every page the user goes to she'd have to re-download the bulk of jQuery, Backbone, etc. (the common modules), as those libraries would constitute large parts of each unique single-page JS file. (This seems to be the approach of the RequireJS multipage example, except that the example doesn't use Almond.)
Optimize one JS file for common modules, and then another for each specific page. That way the user would cache the common modules' file and, browsing between pages, would only have to load a small page-specific JS file. Within this option I see two ways to finish it off, to include the RequireJS functionality:
a. Load the file require.js before the common modules on all pages, using the data-main syntax or a normal <script> tag -- not using Almond at all. That means each page would have three JS files: require.js, common modules, and page-specific modules.
b. It seems that this gist is suggesting a method for plugging Almond into each optimized file, so I wouldn't have to load require.js, but would instead include Almond in both my common modules AND my page-specific modules. Is that right? Is that more efficient than loading require.js upfront?
Thanks for any advice you can offer as to the best way to carry this out.
I think you've answered your own question pretty clearly.
For production, we, as well as most companies I've worked with, use option 3.
Here are advantages of solution 3, and why I think you should use it:
It makes the most of caching: all common functionality is loaded once, generating the least traffic and the fastest loading times when browsing multiple pages. Load times across multiple pages are important, and while the traffic on your side might not be significant compared to other resources you're loading, your clients will really appreciate the faster load times.
It's the most logical, since most files on the site usually share common functionality (see the build-config sketch below).
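To make option 3 concrete, an r.js build configuration along the lines of the requirejs example-multipage project might look roughly like this; the directory and module names (app/common, app/main-page1, ...) are illustrative, not taken from the question:

    ({
        appDir: "www",
        baseUrl: "js/lib",
        dir: "www-built",
        paths: {
            app: "../app"
        },
        modules: [
            // One shared layer: the libraries every page needs, cached once.
            {
                name: "app/common",
                include: ["jquery", "backbone"]
            },
            // Per-page layers exclude everything already in the common layer,
            // so each page only downloads its own small file.
            {
                name: "app/main-page1",
                exclude: ["app/common"]
            },
            {
                name: "app/main-page2",
                exclude: ["app/common"]
            }
        ]
    })

Each page then loads require.js (or an Almond-style shim), then app/common, then its own page layer.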
Here is an interesting advantage for solution 2:
You send the least data to each page. If many of your visitors are one-time visitors, for example on a landing page, this is your best bet. The importance of load times in conversion-oriented scenarios cannot be overstated.
Do your visitors return? Some studies suggest that 40% of visitors arrive with an empty cache.
Other considerations:
If most of your visitors view a single page, consider option 2. Option 3 is great for sites where the average user visits multiple pages, but if a user visits a single page and that's all they see, option 2 is your best bet.
If you have a lot of JavaScript, consider loading some of it first to give the user a visual indication, and then loading the rest in a deferred, asynchronous way (with script tag injection, or directly with require if you're already using it; see the sketch after this list). The threshold at which people notice something 'clunky' in the UI is normally about 100 ms. An example of this is Gmail's 'loading...' indicator.
Given that HTTP connections are keep-alive by default in HTTP/1.1, or with an additional header in HTTP/1.0, sending multiple files is less of a problem than it was 5-10 years ago. Make sure you're sending the Keep-Alive header from your server for HTTP/1.0 clients.
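As a sketch of the deferred-loading idea above (the module names are hypothetical):

    // Render the critical UI first, then pull in the heavier, non-critical
    // modules after the page has loaded so the initial paint stays fast.
    window.addEventListener('load', function () {
        require(['app/reports', 'app/charts'], function (reports, charts) {
            reports.init();
            charts.init();
        });
    });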
Some general advice and reading material:
JavaScript minification is a must; r.js, for example, does this nicely, and your thought process in using it was correct. r.js also combines JavaScript files, which is a step in the right direction.
As I suggested, deferring JavaScript is really important too, and can drastically improve loading times. Deferring execution will make your load time look fast, which is very important, and in some scenarios a lot more important than actually loading fast.
Anything you can load from a CDN, such as external resources, you should load from a CDN. Some libraries people use today, like jQuery, are pretty big (~80 KB), and fetching them from a shared cache could really benefit you. In your example, I would not load Backbone, Underscore, and jQuery from your own site; rather, I'd load them from a CDN (see the sketch below).
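As a sketch of that last point: RequireJS lets you list a CDN location first and a local copy as a fallback. The CDN URL below is only a placeholder:

    requirejs.config({
        paths: {
            // Try the CDN first; fall back to the bundled copy if it is unreachable.
            jquery: [
                "//cdn.example.com/jquery.min",
                "lib/jquery"
            ]
        }
    });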
I created an example repository to demonstrate these 3 kinds of optimization.
It can help us get a better understanding of how to use r.js.
https://github.com/cloudchen/requirejs-bundle-examples
FYI, I prefer to use option 3, following the example in https://github.com/requirejs/example-multipage-shim
I am not sure whether it is the most efficient.
However, I find it convenient because:
I only need to configure require.config (for the various libraries) in one place
During r.js optimization, I can then decide which modules to group as common
I prefer to use option 3, and I can tell you why: it's the most logical, and it makes the most of caching, since all common functionality is loaded once.
A couple of further suggestions:
You can use a content delivery network (CDN) like MaxCDN to ensure your JS files are served quickly to everyone. I'd also suggest putting your JS files at the bottom of your HTML, just before the closing body tag. Hope that helps.

How to automate testing of Tridion templates (with TOM.NET)

I have a recurring problem in templating projects: I can't really test my work in any way other than running the templates in Template Builder. This is a major problem if I'm working on a TBB that is used in several different templates, because it means that after changing the code in the TBB I have to retest all the templates (and probably with several different pages/components, as there might be slightly different cases depending on the content).
As you can see, in big projects where TBBs are reused a lot, changing them costs a lot of time due to the amount of testing necessary, and I would be eager to find a solution for this. I know that unit testing is virtually impossible with the current TOM.NET (most classes/methods are internal), so what could be an alternative way to achieve automated testing?
One solution that I have looked into is to use the Core Service to initiate the rendering process of a template with some test content and then check whether the output is as expected, but achieving this requires quite a lot of code and thus produces unwanted overhead (I think it still takes less time than manually retesting the cases). Also, this doesn't really allow you to test individual TBBs unless you (programmatically) create separate templates with individual TBBs (or a subset of them). The good thing about this solution is that you can run the tests on your local laptop while developing, assuming you can connect to the Tridion server (you'd still have to upload your code to Tridion before running the tests, so it's not a completely ideal solution).
I know that another alternative is to use DD4T/CWA, where you can pretty much handle all the testing in the front end, as the templates are (usually) quite simple.
Any other ideas?
I agree that the emphasis is on automated testing rather than unit testing (which, after all, is mostly about object-oriented programming). With Tridion work, it's about transforming data. To test data transforms, you need known inputs and the ability to make assertions about the outputs. I've tried various approaches over the years, but the most effective so far has been the following:
1) For every template, keep test content in a dedicated Folder, and test pages in a dedicated Structure Group. The content is the input to your tests, and isn't intended to change unless the test requirements change.
2) Put the components on the pages. Publish the pages. Keep it simple: you can often have a page for a single test scenario. You can automate publishing the pages if that helps.
3) Use web testing tools to verify the output. This could be HtmlUnit, Selenium or whatever.
Basically - Tridion is an engine for executing transforms. You don't need a specialised test execution engine for this part, although it's useful to use one for testing the output.
Mocking the package sounds attractive, but as Vesa says, it can turn into a huge amount of work. The simple approach I have outlined works in practice, and was proved on a significant project. You could add variations on the theme if you like: one thing I've considered, but never done on a project, is to use the blueprint to give you more isolation. For example, you could test your page templates by localising your component templates to generate static and predictable component presentations. Suffice it to say that there's enough scope for creativity once you unshackle yourself from the baggage of unit testing approaches.
I have some experience with the CoreService scenario. You will just need to write some helpers to upload your templates, create compound templates, and run them. The tricky part, however, is verification.
You will need to write some test templates that will help you with verification. One way is to write a .NET template to which you pass expected values and which does the verification itself. The other way is to write a Dreamweaver template that prints values from the package, which you then check against the expected values. The advantage of this method is that these values are returned to you as the result of the CoreService Render action, so you can do all the verification on the client side.
But the most difficult part is the dataset creation. It will probably take most of your time.
You could try to isolate the majority of the code in classes that can be unit tested.
I guess the main problem here is that Engine and Package are sealed, so you cannot easily mock them. But you can minimize the interaction with those objects and put the meat of your code in classes that take the relevant input and return the output that should be put in the package, etc.
I think you could get a lot of coverage of your TBBs just from unit tests with this approach.
At a customer I've seen an implementation where the tests invoke the same web service that Template Builder uses, using it to execute the templates, evaluate the results, etc.
Probably worth exploring.
I would suggest writing your own test runner with two goals: create test data and run tests.
Create test data: the idea is to create sample datasets automatically (all fields, some fields, and only mandatory fields). (Bonus points for using Chuck Norris quotes instead of lorem ipsum.) The title of the sample content uses a naming scheme, like [TestContent], and/or it lives in its own folder with metadata attached (so you can find it later).
Create test pages: find the TestContent. Use GetListUsingItems to find pages where the template is used. Copy the page, paste it into a TestContent Structure Group, and save. Open the page, add the test content, remove the other content, and save the page with a special naming scheme.
Run tests: find the TestContent, preview each item, and write out a report with rendering time, success status, and number of characters.
I consider your problem completely technology agnostic, regardless of the approach you use (thinking in the context of Tridion).
The problem is that you are modifying one thing that is used in multiple places (Component/Page Templates), and those places need to be tested before you push that as a valid change.
Even if you make the changes properly, the code runs fine, and you have a result, it may not be the result expected by other TBBs that consume your output.
That is the problem itself, unfortunately :(
If the problem is that you have to test all the Templates using that TBB, that is still a problem with no solution.
If the problem is that you don't want to impact the current platform with your changes/testing, nor interfere with other development going on, that is a different scenario.
I would solve the second one by creating a separate Publication that inherits from the Publication with valid code/data to test (or have that created in advance), then make your changes there and test.
This approach makes sense if you are using the TBB as part of many Component/Page Templates.
If you have the luxury of granularity in the front end (your TBB produces an atomic piece of code), the complexity of the scenario is slightly reduced, but you still have to test all the scenarios anyway.

How can I make my Silverlight control load faster?

I cached my control, but there is still a long pause. What could I do to make this go even quicker?
What are some general tips for improving Silverlight application loading speed?
If you use Silverlight Toolkit themes, remove them from the project and measure the improvement. In our case, we noticed a major improvement in load time performance just by doing that. I subsequently copied some of the styles into our own resource files rather than referencing the theming toolkit. Apologies if this does not apply in your case.

JSF (and friends) tags vs. traditional html tags

So this question came up today and I didn't have a specific or scientific answer.
What are the costs associated with using JSF (or Tomahawk, Facelets, etc.) tags in place of traditional HTML tags? My gut reaction is that you should use JSF tags in situations where you need the additional functionality they provide, and use traditional tags when you don't. I also feel that JSF tags would require more resources than HTML (since the server has to take them and re-render them as HTML anyway). Does anybody know what the cost actually is (in terms of time and memory)? It would also be useful to know which convention is in common use: pure JSF, or a mixture of the two?
Sure there is a cost. Whether that is noticeable or negligible depends on the hardware and the load of the server in question. Profile it and upgrade the server if necessary.
You should, however, realize that on the other hand you save time and cost compared to implementing the same functionality without the help of a component-based MVC framework. That would mean a lot of boilerplate code for gathering the parameters, doing validations and conversions, and updating model values, which would possibly not be written as efficiently as in existing and widely used MVC frameworks.
The Sun JSF development team puts performance as high priority and Mojarra is already optimized as much as possible.
Our site http://www.skill-guru.com runs on JSF / Tomahawk / RichFaces on a Tomcat server. We do not see any speed issues there.
As Jeff pointed out, it all gets compiled, so there is not much noticeable difference unless you really use too much RichFaces or other fancy stuff.
JSF does make your life easier.
A JSF page gets compiled upon first request (or pre-compiled if you specify that in the config). Thus, it's not like the page needs to be parsed every time it's requested. I don't have any specific numbers for the time/CPU/memory cost, but I'm sure it's negligible.