First off, I'm brand new to Dojo.
I'm integrating it into our existing web app.
We initially only need the Calendar widget functionality.
I'm looking to keep the number and size of files as small as possible.
I don't believe downloading just the base code file will be sufficient, will it?
http://dojotoolkit.org/download/
Additionally, the Dojo toolkit download is a huge zip (even if I were to use only the compressed files).
Am I left with downloading the toolkit and manually removing everything I don't need?
Is there no custom download builder like jquery ui?
Well, the dojo library is much larger than jquery ui and I don't know of an equivalent to the download builder. If you are just interested in using dojo for a single widget, you might consider exploring a different library.
To use dojox/Calendar, you are still going to need the many dependencies it has on other dojo modules. You can do this manually, but it will be tedious.
One thing you can do is run dojo's build system to package dojox/Calendar and all of its dependencies into a single file. This isn't a trivial task and requires a good understanding of dojo's AMD loader and package system.
If you want to go down this route, I would clone the dojo-boilerplate project on github. It contains everything you need to do this out of the box. Then follow the build system tutorial to understand how you set this up. From there you can have your app depend on dojox/Calendar to produce the file you include on your page to consume it.
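To give a feel for what that looks like, here is a rough sketch of a build profile that rolls the calendar and everything it depends on into one layer. The package locations and the "app/main" module are placeholders borrowed from the boilerplate layout, not something your project will have verbatim (note that the widget's actual module id is dojox/calendar/Calendar, under the lowercase calendar package):

    // app.profile.js -- sketch of a build profile; paths assume a
    // boilerplate-style layout where dojo/dijit/dojox/app sit side by side.
    var profile = {
        basePath: "..",
        releaseDir: "release",
        action: "release",

        packages: [
            { name: "dojo",  location: "dojo"  },
            { name: "dijit", location: "dijit" },
            { name: "dojox", location: "dojox" },
            { name: "app",   location: "app"   }
        ],

        layers: {
            // One minified file containing the loader, your entry module,
            // the calendar widget and everything they pull in.
            "dojo/dojo": {
                include: [ "dojo/dojo", "app/main", "dojox/calendar/Calendar" ],
                boot: true,
                customBase: true
            }
        }
    };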
I suggest that you put the whole thing (yes, it's a lot of tiny files) on your server.
Dojo 1.9 is written so that when users visit, their browsers will only download the individual pieces on an as-needed basis. This is possible because every piece (an AMD module) is explicit about what it needs.
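For example, once dojo.js is on the page, a single require() call is enough and the loader will fetch the calendar module plus each of its dependencies individually as they are needed (the node id and the option below are only illustrative):

    // Assumes <script src="dojo/dojo.js"> is already on the page and a
    // <div id="calendarNode"></div> exists; the loader fetches each
    // dependency on demand, one file at a time.
    require(["dojox/calendar/Calendar", "dojo/domReady!"], function (Calendar) {
        var cal = new Calendar({ dateInterval: "month" }, "calendarNode");
        cal.startup();
    });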
Once you have something that works, you can choose to speed up loading times by using the build system. Basically, this involves saying: "If the user wants this thing, they'll probably want all this other stuff, so create a big minified lump and give it to them as soon as they start asking." Best of all, it doesn't have to be perfect: if you miss including something, the user's browser will still request it a la carte.
At work we're using the Dojo Boilerplate starting application which helps give some initial organization to the build process.
We have a legacy PHP/Zend application built with dojo/dijit and dojox/mobile. We want to start to remodel the application using DOJO2. Much of the existing UI is already in the form of dijit AJAX pulls against server code.
We've been through the dojo2 tutorials and have become fond of WebStorm.
As folks do such migrations, is it most often done as a new dojo2 application that makes pulls against the legacy (existing) server code, or is it more common to point WebStorm's 'dist' directory at the existing application's javascript folder and create 'stub' server pages that just include the new DOJO2 code?
Or does it not really matter?
We're just looking to see what is the most common path forward with DOJO2 while minimizing the mucking that goes on with Zend's module routers, and don't want to start down a path that has some unknown 'gotchas'.
Dojo 2 follows a reactive architecture and has a different build path from legacy Dojo. You may find it easier to re-write each PHP page's TS/JS code one page at a time to use Dojo 2. I'm afraid I am not familiar with Zend, so I cannot provide specific advice on that.
We have a working Aurelia-based application.
How do we migrate it to use aurelia-cli?
I can speak to this after doing it myself and being actively involved in another project. The project I converted was stamp-web. The best way is to take your project and:
Copy it to a backup folder
Run "au new" and go through the process of creating your target application and setup the values (ES6, Jasmine etc.)
Copy over your src/templates/css piece by piece. I recommend starting with stuff like helpers, utilities, etc. (stuff that has very few dependencies). As you go, bring the OSS dependencies into package.json and modify aurelia.json to reference them correctly in the vendor (or another) bundle (see the sketch after this list)
i18n needs some special configuration (so beware of that) - look at their website or my app stamp-web as an example (I have a bunch of configs that work for various OSS libraries)
Keep going as you build up and test your app.
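For reference, wiring a dependency into the vendor bundle in aurelia.json usually looks something like the excerpt below. The moment entry is just an example of a library that needs an explicit path and main; plain string entries work for packages the CLI can resolve on its own:

    "build": {
      "bundles": [
        {
          "name": "vendor-bundle.js",
          "dependencies": [
            "aurelia-binding",
            "aurelia-templating",
            {
              "name": "moment",
              "path": "../node_modules/moment",
              "main": "moment"
            }
          ]
        }
      ]
    }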
If you look at what I did in stamp-web in the July 16, 2016 commit, you will see many of the changes. I also decided to follow the resource convention for generated objects (like elements, value-converters). This is not needed, and I converted VERY early in the process - as you can see by the date.
Having said all that, I am involved in a much larger project (a corporate affair where we have been working on a large web app for 10+ months with a team), and porting it over has been painful. It is likely 3x bigger than stamp-web, but the trouble we have had there is because we have a central "library" of widgets, services, etc. as well as build tooling that was used to build apps. The app conversion was simple, but getting the apps to use the library itself has been a very large challenge. This is largely because, while the CLI seems powerful, we have run into some odd behaviors (we have filed bugs as appropriate) as well as a general lack of documentation regarding aurelia.json and the RequireJS syntax.
However, when you get it converted your app will load WAY faster than the SystemJS-loaded apps... it is almost... impressive how quickly it loads.
Initial tracing can be a pain on a large app, but updates and watches are seamless.
Good luck!
I am converting a good old ASP.Net website to a single page application using Ember.js in a ASP.NET Web API project.
All the devs on my team and I are pretty new to JavaScript. We spent the last 2 weeks learning the basics and comparing SPA frameworks. I apologize in advance if my question sounds stupid :)
All the Ember tutorials I have found so far include all the Handlebars templates in one single file. I assumed it would be pretty obvious to split them into separate files (*.hbs) when the time came, but it's not. I might be totally missing something here, but I found about 4 ways to get my templates back when I need them. I'd like to know which method you would recommend:
Concatenate and then inject all the template files when the app loads. I could write some C# code on the server side that concatenates all the template files into a single one when the app loads (i.e. each time a visitor enters the app). It seems odd to me in terms of processing, but also because the generated HTML file will be pretty heavy.
Load each template dynamically via Ajax when I need it. Pretty much what is done here. I kinda like this solution even though I haven't tried it yet. It makes sense to me to fetch a template asynchronously when I need it instead of loading the entire app on the first load (see the sketch after this list).
Use the Bundling mechanism of Asp.Net MVC. I found stuff like csharp-ember-handlebars to precompile the templates on the server-side and return them as a single javascript file. It works-ish but I feel like the precompiled file will become pretty heavy as I add new templates.
Use Grunt with the plugin grunt-ember-handlebars to precompile the templates. I'm not familiar with Grunt, but if I understand correctly, all the devs working on the project will have to install Node.js + Grunt, learn how to use a command prompt, and remember to run the command before each commit (if they modified a template). This is not obvious for the web designers. And adding Grunt to the build actions would require the entire dev team (working on other projects) to have Grunt on their machine (not acceptable).
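To make option 2 concrete, the sketch below shows roughly what fetching and registering a template at runtime looks like with Ember 1.x and jQuery; the /templates URL and the route name are placeholders:

    // Fetch a raw .hbs file and register it so normal template lookup finds it.
    // Assumes the full (non-runtime) Handlebars compiler is loaded on the page.
    function loadTemplate(name) {
        return Ember.$.ajax({ url: '/templates/' + name + '.hbs', dataType: 'text' })
            .then(function (source) {
                Ember.TEMPLATES[name] = Ember.Handlebars.compile(source);
            });
    }

    // e.g. delay a route until its template has arrived:
    App.PostsRoute = Ember.Route.extend({
        beforeModel: function () {
            return loadTemplate('posts');
        }
    });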
I need to find a simple and elegant solution to address this issue. My project is in a solution with 35 other projects and I cannot add too much complexity to the build, nor depend on unstable libraries. Maybe I have been too optimistic when I thought I could use Ember for my project. Any suggestion would be welcome!
Your #3 is the most ideal (and common) way that I've seen applications handle templates. With a compiled and minified template file you really don't have to worry too much about performance problems from adding new templates, especially if you take advantage of caching.
One benefit to having the templates compiled and available off the bat is that users only need to Download Your Resources Once™, as opposed to downloading resources for each subsequent page load. This leads to a fantastic user experience.
With the tools we have today for asset optimization (for example YUI Compressor), how do you automate it?
For example, I have designed a new website using LESS, so every time I edit the LESS files I have to manually compile them to CSS. The same goes for the JavaScript.
So I have to make my PHP project point to the uncompressed CSS/JS, and when I'm finished, I compress/optimize them and point the project to the optimized ones again.
I know that there are tools that help with this (like less.app, which I've used), and that there are even PHP libs that handle this whole problem (like Assetic), but I don't like them much. I'm searching for a "programmed" way to deal with optimized assets. Maybe some script that "watches" the uncompressed files or something...
I wish I had as many alternatives as the Django framework has.
Please, if the question is not well written, tell me and we can improve it, so we can establish a good practice for assets :)
I think one efficient solution would be to do this task on the development side, when writing code, and point the code to the optimized files.
One tool that seems to work well is Live Reload (only for OS X, although there is a Windows version on the way).
I like this option as there is no overload on the code to maintain assets.
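If you would rather have the "script that watches the uncompressed files" mentioned in the question, a small Grunt setup does the same job. This is only a sketch: grunt-contrib-less and grunt-contrib-watch are real plugins, but the file paths are placeholders.

    // Gruntfile.js -- recompile the LESS on every change to the source files.
    module.exports = function (grunt) {
        grunt.initConfig({
            less: {
                dist: {
                    options: { compress: true },
                    files: { 'public/css/site.css': 'assets/less/site.less' }
                }
            },
            watch: {
                styles: {
                    files: ['assets/less/**/*.less'],
                    tasks: ['less']
                }
            }
        });

        grunt.loadNpmTasks('grunt-contrib-less');
        grunt.loadNpmTasks('grunt-contrib-watch');
        grunt.registerTask('default', ['less', 'watch']);
    };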
The company I work for is building a managed force.com application as an integration with the service we provide.
We are having issues working concurrently on the same set of files due to the shoddy tooling that is provided with the force.com Eclipse plugin. If 2 developers are working on the same file, one is given a message that he can't save -- once he merges he has to manually force the plugin to push his changes to the server along with clicking 2 'Are you really sure' messages.
Basically, the tooling does a shoddy job of merging in changes and forces minutes of work every time the developer wants to save if another person has modified the file he's working on.
We're currently working around this by basically 'locking' individual files by letting co-workers know who is editing a file.
It feels like there has got to be a better way in this day and age. Does anyone know of a different toolset we could use, process we could change, or anything we can do to make this easier?
When working with the Force.com platform, my current organisation has found that a number of different approaches can work depending on the situation. We all use the Eclipse Force.com plugin without issues and have found the following setups to work well.
We have a centralised version control system that we deploy from, using a series of ant commands, to a developer org instance. Then, depending on the scope of the work, we either separate it off into chunks, with each developer having their own development org and merging and testing the changes regularly, or we work together in a single development org (which, if you have 2 developers, should be no major problem), allowing almost instant integration.
If you are both trying to work on the same file you should be pair programming anyway, but if working on two components of a similar system together, sharing the same org can allow you to develop in a fast and flexible manner by creating the skeleton of the system you wish to use and then individually fleshing out the detail.
I have used both methods extensively and, as I say, both work really well depending on the situation.
Each developer could work in a separate development sandbox (if you have Enterprise Edition, I think 10 sandboxes with full config & a limited amount of data are included in the fee?). From time to time you would merge your changes (a diff tool from any version control system should be enough) and test them in an integration environment. The chain development -> integration -> system test -> QA -> production can be useful for other reasons too.
A separate trick to consider can be used if, for example, 2 guys work on the same trigger. I learned it on the "DEV 401" course for developers.
Move all your logic to classes. Seriously. They will be simpler to unit test too.
Add a custom field (multi-select picklist) to the User object. Values should be equal to each separate feature people are working on. It can hold up to 500 values, so you should be safe.
For User account of developer 1 set "feature1" in the picklist. Set "feature2" for the other guy.
In the trigger, write an if that tests for the presence of each picklist value and enters or skips the call to the relevant class. This wastes 1 query, but you are sure that only the code you want will be called.
Each developer keeps on working in his own class file.
For an integration test of both features, simply set the multi-select to contain both features.
I found this trick especially useful when the other guy's code turned out to be non-optimal and ate too many resources. I just disabled his feature on my user account and kept on working.
This trick can, to some extent, be applied to Visualforce pages too (if you can divide them into components).
If you don't want to waste the query - use some logic like "user's first name contains X" ;)
We had/have the exact same problem: we have a team of 10 devs working on a force.com application that has loads of Apex classes (>300) and VF pages (>300).
We started using the Eclipse plugin but found it:
was too slow - working outside of the USA, each save takes > 5 seconds
had too many merge issues with a team of 10 developers
Next we tried developing in our own individual sandboxes and then merging code. This is OK for a small project, but when you have lots of files and need changes to be pushed between sandboxes it becomes impossible to manage, as the only thing worse than force.com's development tooling is force.com's deployment/build tools. There is no automation; it's all manual. There is no easy way to move data between sandboxes either.
Our third approach was to just edit all our VF pages and Apex code in the browser (not using their embedded editor that shows up in the bottom half of the page, because that is buggy and slow, but just using the regular editor under Setup > Develop > Apex Classes). This worked OK. To supplement this we also had a scheduled job that would download all our code and save it into our SVN repository. We also built a tool that allowed us to click a folder on our desktop, zip its contents, and deploy it as a static resource for us.
However, this approach still has its shortcomings: it is slow and painful to develop in the cloud, and their (Salesforce) idea of Development as a Service is crazy. Also, we have no real SCM; we only have it acting as backups.
Bottom line: force.com is a CRM and not a development platform. If you can, run, flee, get away from it as fast as you can. Using it for anything apart from a CRM is more trouble than it is worth. Even their slogan "No Software" makes me laugh every time.
I'm not familiar with force.com, but couldn't you use source control and pull all the files down from force.com into your repository? Then you could all do your work and merge your changes back into the mainline. Then, whenever it's necessary, push the mainline up to force.com.
Take a look at the "Development Lifecycle Guide: Enterprise Development on the Force.com Platform". You can find it on developer.force.com's documentation page.
You might want to consider working on separate static resources and pages and then just being careful when editing objects, classes etc.. If most of your development happens on client side code (page, staticresource, lightning component/app) you might be interested in this project: https://github.com/bvellacott/salesforce-build . In any case I strongly suggest using version control. If not on a server then at least locally on your machine, in case your peers overwrite your work.