Improve StringTemplate 4 performance - stringtemplate

I'm using StringTemplate 4 to internally generate web pages in an Android application.
The templates used on my application server (for web access) are also used to generate pages internally in the Android application. Combined with a JSON data store mechanism, this allows the Android application to run offline in most cases, using exactly the same application logic and templates as the server.
The problem I'm having now is performance.
A typical server request is processed in approximately 200 ms: request analysis, verification, processing, ST4 load and HTML page generation. This is fine for me; I have some performance improvements pending, but I think it's quite good for now.
However, in the Android application the ST4 load takes 1-1.5 seconds. The template structure may have 2 or 3 levels, and the templates have several renderers.
I've run some tests, such as creating a one-level template (an .stg without imports), but performance does not improve, so I think it has something to do with template parsing and renderer loading.
Is there any way to improve ST4 load and parsing?
Is there any way to store and load a CompiledST object or something similar?
I think the ST4 concept is very good; however, if a template does not change, why load and parse it, and its related templates, every time?
I know there is a caching mechanism, but I'm loading a different .stg for each request, so this does not save time on first loads, and caching every .stg may eat up the device's memory.

The reference implementation of StringTemplate does not currently support serializing/deserializing precompiled templates. The C# port of StringTemplate 4 does support this, and the feature is controlled via the TemplateGroup.EnableCache flag.
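On the Java side, one workaround is to keep already-parsed STGroup instances in a small in-memory cache keyed by group file, so each .stg (and its imports and renderers) is parsed only once per process instead of on every request. A minimal sketch, assuming a Date renderer and an LRU cap of 8 groups (both of which are illustrative, not part of your setup):

import org.stringtemplate.v4.DateRenderer;
import org.stringtemplate.v4.STGroup;
import org.stringtemplate.v4.STGroupFile;

import java.util.Date;
import java.util.LinkedHashMap;
import java.util.Map;

public class TemplateCache {
    // Keep at most 8 parsed groups; evict the least recently used one so device memory stays bounded.
    private static final int MAX_GROUPS = 8;

    private static final Map<String, STGroup> CACHE =
            new LinkedHashMap<String, STGroup>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, STGroup> eldest) {
                    return size() > MAX_GROUPS;
                }
            };

    public static synchronized STGroup get(String groupPath) {
        STGroup group = CACHE.get(groupPath);
        if (group == null) {
            group = new STGroupFile(groupPath);                        // parse the .stg once
            group.registerRenderer(Date.class, new DateRenderer());    // re-register renderers here
            CACHE.put(groupPath, group);
        }
        return group;                                                   // reused on later requests
    }
}

A request handler would then call TemplateCache.get("templates/page.stg").getInstanceOf("page"), add its attributes and render. Only the first request for a given group pays the parsing cost, and the LRU cap keeps memory use predictable instead of caching every group forever.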

Related

Performance testing of a Sitecore website

My client gave me a Sitecore website to do some performance testing. I really don't have any experience with Sitecore websites or Sitecore itself (which I am working on now). I have some idea of how to performance-test a website and also got additional info from Stack Overflow. But I am curious to know whether there is any difference when testing a Sitecore website. What is the best practice for testing a Sitecore website? A little bit about the performance testing scope:
The website handles different kinds of enrollment paths for students. There are a couple of enrollment paths, all of which end with a payment made by the customer. More than one student can enroll at a time (e.g. six together). Performance testing will include enrollments for all of these paths.
Many customers may try to enroll at the same time, in the same enrollment path and in different enrollment paths.
Also keep in mind that, since this is a customer-facing website, the images/text/files hosted in Sitecore should be shown on the website quite quickly.
Any help is appreciated. Thanks!
Typically there are three ways to come at performance testing for Sitecore.
The first is that it's basically just a web application, so most tools you'd use to test those are valid. Load testing tools like jMeter (or Windows equivalents) that simulate requests to pages and measure response times can give you an idea of how your Sitecore application works under load. Or the developer tools in browsers can show you how long individual requests take, and what resources are being downloaded. Both can help you form a picture of the site's overall performance levels.
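If you just want a quick sanity check before standing up a full load test, even a tiny program that fires requests and records response times will surface gross problems. A rough Java sketch (the URL and request count are placeholders, and this is not a substitute for a proper tool like jMeter):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ResponseTimeProbe {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://your-sitecore-site.example/en/home"); // placeholder URL
        for (int i = 1; i <= 20; i++) {
            long start = System.nanoTime();
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            int status = conn.getResponseCode();                        // issues the GET request
            try (InputStream in = conn.getInputStream()) {
                while (in.read() != -1) { /* drain the body so the timing includes the download */ }
            }
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println("request " + i + ": HTTP " + status + " in " + ms + " ms");
            conn.disconnect();
        }
    }
}

Real load tools add concurrency, ramp-up and reporting on top of this idea, but even raw single-user numbers can tell you whether a problem is server-side or in the browser.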
The second is that Sitecore includes some tools for measuring how hard Sitecore itself is working to render pages. The "Experience Editor" (the WYSIWYG view for editing web page content) has a "debug" mode which can tell you how many content items are being read to render a page, what UI components are being run and how long these things are taking. This can help you tweak how code queries Sitecore's databases, and how components are cached in order to increase performance.
Thirdly, any ASP.Net application can have low-level performance tracing done with standard .Net tools. Visual Studio's performance tracing tools, or 3rd party stuff like dotTrace can all give you a detailed view of how long IIS spends working on individual pages, and what parts of the code are taking the most time or memory.
I wrote up a user-group presentation I did on this topic a while back:
https://jermdavis.wordpress.com/2017/10/02/measure-if-you-want-to-go-faster/
and more recently I wrote about some general patterns you might see when doing low-level performance traces:
https://jermdavis.wordpress.com/2018/02/05/spotting-common-challenges-when-youre-doing-performance-tracing/
Sitecore is basically a .NET-based content management system, so there should not be any difference from performance testing any other web application; the same approach applies.
The best entry-level document I've seen on the web so far is Performance Testing Guidance for Web Applications; it will quickly get you familiar with the concept of load testing, how to implement it, what metrics need to be considered, etc.
With regard to the load testing tool, the most natural choice would be the Microsoft Visual Studio Load Testing Framework; however, it assumes you have a relevant license and some C# coding skills. If you don't have either of these, you can consider one of the free and open source load testing tools.
While creating your script, keep in mind that each virtual user needs to represent a real user as closely as possible, so mind cookies, headers, cache, think times, distribution of virtual user groups, etc.

Pack multiple files into one and then split

I'm developing a mobile game and the graphics are heavy, so we cannot put all of them into one image atlas; hence we have multiple atlases. We use PreloadQueue to load all of the resources. This results in many hits on our server from each client. There is also some additional time delay when we load every file separately instead of one big 'data' file.
We suspect it could be better if we could pack all of our atlases into one "data" file and load it with PreloadQueue at once, then unpack/split it and use the pieces as we do currently:
pq.getResult('startscreen');
Is there any way to pack all data into one file?
If yes, then wouldn't it hurt our clients' performance, since the unpacking operation can take up to twice the memory and some CPU resources?
I would suggest using the following technique outlined on the CreateJS website.
ManifestLoader Class: http://createjs.com/docs/preloadjs/classes/ManifestLoader.html
It allows you to load multiple manifests and use only one preloader. All the load status information can be tracked as well.

cshtml load time issue

I'm investigating a perf issue...
I have a log statement just before the View("Index", Model) call to capture the time/latency information.
When I compare this log entry's timestamp with the IIS log, there is always a difference of a few seconds.
That means the time is being spent during view rendering.
What are some pointers for debugging this issue?
The cshtml is not making any server calls; it just iterates over the model data (30 items) and shows 1-2 lines of information for each.
This may be caused by having multiple view engines configured for your application. By default, ASP.NET MVC loads both WebFormViewEngine and RazorViewEngine, with WebFormViewEngine taking priority. So, ASP.NET first searches your views directory structure with WebFormViewEngine and, only if it doesn't find a match, goes on to search with RazorViewEngine. Of course, if you aren't using any Web Forms views, this is inefficient. You can disable it by running this code in your application's bootstrap (e.g. global.asax):
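// Keep only the Razor view engine so MVC never probes the Web Forms view locations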
ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new RazorViewEngine());

iPad - how should I distribute offline web content for use by a UIWebView in an application?

I'm building an application that needs to download web content for offline viewing on an iPad. At present I'm loading some web content from the web for test purposes and displaying this with a UIWebView. Implementing that was simple enough. Now I need to make some modifications to support offline content. Eventually that offline content would be downloaded in user selectable bundles.
As I see it I have a number of options but I may have missed some:
Pack content in a ZIP (or other archive) file and unpack the content when it is downloaded to the iPad.
Put the content in a SQLite database. This seems to require some 3rd party libs like FMDB.
Use Core Data. From what I understand this supports a number of storage formats including SQLite.
Use the filesystem and download each required file individually. OK, not really a bundle but maybe this is the best option?
Considerations/Questions:
What are the storage limitations and performance limitations for each of these methods? And is there an overall storage limit per iPad app?
If I'm going to have the user navigate through the downloaded content, what option is easier to code up?
It would seem like spinning up a local web server would be one of the most efficient ways to handle the runtime aspects of displaying the content. Are there any open source examples of this which load from a bundle like options 1-3?
The other side of this is the content creation and it seems like zipping up the content (option 1) is the simplest from this angle. The other options would appear to require creation of tools to support the content creator.
If you have control over the content, I'd recommend a mix of the first and the third options. If the content is created by you (like levels, etc.) then simply store it on the server, download a zip and store it locally. Use Core Data to store an index of the things you've downloaded, like the path of the folder it's stored in and its name/origin/etc., but not the raw data. Databases are not meant to hold massive amounts of raw content, but rather structured data. And even if they can -- I'd not do so.
For your considerations:
Disk space is the only limit I know of on the iPad. However, databases tend to get slower if they grow too large. If you only rarely scan through the data, use the file system directly -- it may prove faster and cheaper.
The index in Core Data could store all relevant data. You will have very easy and very quick access. Opening a piece of content will load it from the file system, which is quick, cheap and doesn't strain the index.
Why would you do so? Redirecting your WebView to a file:// URL will have the same effect, won't it?
Should be answered by now.
If you don't have control, then use the same approach as above, but download each file separately, as suggested in option four. After unzipping, both cases are basically the same.
Please get back if you have questions.
You could create an XML file for each bundle, containing the path to each file in the bundle, and place it in a folder common to each bundle. When downloading, download and parse the XML first, then download each resource one by one. This will spare you the overhead of zipping and unzipping the content. Create a folder for each bundle locally and recreate the folder structure of the bundle there. This way the content will work online and offline without changes.
With a little effort, you could even keep track of file versions by including a version number in the XML file for each resource, so if your content has been partially updated, only the files with changed version numbers have to be downloaded again.
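Just as an illustration of the idea (the element and attribute names here are made up, not any standard format), such a manifest might look like:

<bundle name="city-guide">
    <file path="index.html" version="2"/>
    <file path="css/style.css" version="1"/>
    <file path="images/map.png" version="3"/>
</bundle>

The client compares each version attribute against what it already has on disk and only downloads the entries that changed.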

Website data retrieval

A recent article prompted me to pick up a project I have been working on for a while. I want to create a web service front end for a number of sites to allow automated completion of forms and data retrieval from the results, as well as other areas of the sites. I have achieved a degree of success using Selenium and custom code; however, I am looking to extend this to a stage where adding additional sites is a trivial task (perhaps one which doesn't even require a developer).
The Kapow web data server looks like it achieves a lot of this; however, I am told it is quite expensive (currently awaiting a quote). Has anyone had experience with it, or can anyone suggest alternatives (ideally open source)?
Disclaimer: I realise the potential legal issues around automating data retrieval from 3rd-party websites - this tool is designed to be used in a price comparison system, and all of the websites integrated with it will be done with the express permission of the owners. Where the sites provide an API, that will clearly be the favoured approach.
Thanks
I realised it's been a while since I posted this; however, should anyone come across it: I have had a lot of success using the WSO2 framework (particularly the Mashup Server) for this. For data mining tasks I have also used a Java library that it wraps - webharvest - which has achieved everything I needed.
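For anyone who finds this later and wants to see what the webharvest side looks like: the scraping logic lives in an XML configuration file, and the Java side is little more than loading and executing that configuration. A rough sketch from memory of the Web-Harvest 2.x API (the config path and working directory are placeholders - double-check the class names against the current docs):

import org.webharvest.definition.ScraperConfiguration;
import org.webharvest.runtime.Scraper;

public class HarvestRunner {
    public static void main(String[] args) throws Exception {
        // The XML configuration describes which pages to fetch and which data to extract.
        ScraperConfiguration config =
                new ScraperConfiguration("configs/price-comparison.xml"); // placeholder path
        Scraper scraper = new Scraper(config, "work/");                   // working directory for output files
        scraper.execute();
    }
}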