We have a working Aurelia-based application.
How do we migrate it to use aurelia-cli?
I can speak to this after doing it myself and being actively involved in another project. The project I converted was stamp-web. The best way is to take your project and:
Copy it to a backup folder
Run "au new" and go through the process of creating your target application and setup the values (ES6, Jasmine etc.)
Copy over your src/templates/CSS piece by piece. I recommend starting with stuff like helpers, utilities, etc. (stuff that has very few dependencies). As you go, bring the OSS dependencies into package.json and modify aurelia.json to reference them correctly in the vendor (or another) bundle; see the aurelia.json sketch after these steps.
i18n needs some special configuration (so beware of that) - look to their website or my app stamp-web as an example (I have a bunch of configs that work for various OSS libraries)
Keep going as you build up and test your app.
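For reference, here is a minimal sketch of the vendor bundle entry under "build" in aurelia.json; "some-lib" and its paths are placeholders, and the exact shape depends on your CLI version:

    "bundles": [
      {
        "name": "vendor-bundle.js",
        "dependencies": [
          "aurelia-binding",
          {
            "name": "some-lib",
            "path": "../node_modules/some-lib/dist/amd",
            "main": "some-lib"
          }
        ]
      }
    ]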
If you look at what I did in stamp-web in the July 16, 2016 commit, it will show many of the changes. I also decided to follow the resource convention for generated objects (like elements and value-converters). This is not needed, and I converted VERY early in the process - as you can see by the date.
Having said all that, I am involved in a much larger project (a corporate affair where we have been working on a large web app for 10+ months with a team), and porting it over has been painful. It is likely 3x bigger than stamp-web, but the trouble we have had there is because we have a central "library" of widgets, services, etc. as well as build tooling that was used to build apps. The app conversion was simple, but getting the apps to use the library itself has been a very large challenge. This is largely because, while the CLI seems powerful, we have run into some odd behaviors (we have filed bugs as appropriate) as well as a general lack of documentation regarding aurelia.json and the RequireJS syntax.
However, when you get it converted, your app will load WAY faster than the SystemJS-loaded apps..... it is almost.... impressive how quickly it loads.
Initial tracing can be a pain on a large app; however, updates and watches are seamless.....
Good luck!
First off, I'm brand new to Dojo.
I'm integrating it into our existing web app.
We initially only need the Calendar widget functionality.
I'm looking to keep the number and size of files as small as possible.
I don't believe downloading just the base code file will be sufficient?
http://dojotoolkit.org/download/
Additionally, the Dojo toolkit download is a huge zip (even if I were to only use compressed files).
Am I left with downloading the toolkit and manually removing everything I don't need?
Is there no custom download builder like jquery ui?
Well, the dojo library is much larger than jquery ui and I don't know of an equivalent to the download builder. If you are just interested in using dojo for a single widget, you might consider exploring a different library.
To use dojox/Calendar, you are still going to need the many dependencies it has on other dojo modules. You can do this manually, but it will be tedious.
One thing you can do is run dojo's build system to package dojox/Calendar and all of its dependencies into a single file. This isn't a trivial task and requires a good understanding of dojo's AMD loader and package system.
If you want to go down this route, I would clone the dojo-boilerplate project on github. It contains everything you need to do this out of the box. Then follow the build system tutorial to understand how you set this up. From there you can have your app depend on dojox/Calendar to produce the file you include on your page to consume it.
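As a rough illustration of what that build setup involves (this is not the boilerplate's actual profile; the layer name and paths are hypothetical), a profile layer that rolls dojox/Calendar and its dependencies into a single file looks something like this:

    // app.profile.js - a sketch of a Dojo build profile
    var profile = {
        basePath: "./src",
        releaseDir: "../release",
        action: "release",
        layers: {
            "app/main": {
                // pull the Calendar widget and everything it requires into one file
                include: [ "dojox/calendar/Calendar" ],
                boot: true
            }
        }
    };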
I suggest that you put the whole thing (yes, it's a lot of tiny files) on your server.
Dojo 1.9 is written so that when users visit, their computers will only download the individual pieces on an as-needed basis. This is possible because every piece (an AMD module) is explicit about what it needs.
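For example, a page can simply ask the loader for the widget and let the dependency graph take care of the rest ("calendarNode" is a hypothetical DOM id):

    require(["dojox/calendar/Calendar", "dojo/domReady!"], function (Calendar) {
        // The loader fetches Calendar and each of its dependencies on demand.
        var calendar = new Calendar({}, "calendarNode");
        calendar.startup();
    });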
Once you have something that works, you can choose to speed up loading times by using the build system. Basically, this involves going: "If the user wants this thing, they'll probably want all this other stuff, so create a big minified lump and give it to them whenever they start asking." Best of all, it doesn't have to be perfect: if you miss including something, the user's browser will still request it a la carte.
At work we're using the Dojo Boilerplate starting application which helps give some initial organization to the build process.
I am converting a good old ASP.Net website to a single page application using Ember.js in a ASP.NET Web API project.
All the devs on my team, myself included, are pretty new to JavaScript. We spent the last 2 weeks learning the basics and comparing SPA frameworks. I apologize in advance if my question sounds stupid :)
All the Ember tutorials I have found so far included all Handlebars templates in one single file. I assumed it would be pretty obvious how to split them into separate files (*.hbs) when the time came, but it's not. I might be totally missing something here, but I found about 4 ways to get my templates back when I need them. I'd like to know which method you would recommend:
Concatenate and then inject all the template files when the app loads. I could write some C# code on the server side that concatenates all the template files into a single one when the app loads (i.e. each time a visitor enters the app). It seems odd to me, in terms of processing, but also because the generated HTML file will be pretty heavy.
Load each template dynamically via Ajax when I need it. Pretty much what is done here. I kinda like this solution even though I haven't tried it yet (I sketch what I have in mind after this list). It makes sense to me to get a template asynchronously when I need it instead of loading the entire app on the first load.
Use the Bundling mechanism of Asp.Net MVC. I found stuff like csharp-ember-handlebars to precompile the templates on the server-side and return them as a single javascript file. It works-ish but I feel like the precompiled file will become pretty heavy as I add new templates.
Use Grunt with the plugin grunt-ember-handlebars to precompile the templates. I'm not familiar with Grunt, but if I understand correctly, all the devs working on the project will have to install Node.js + Grunt + learn how to use a command prompt + remember to run the command before each commit (if they modified a template). This is not obvious for the web designers. And adding Grunt to the build actions would require the entire dev team (working on other projects) to have Grunt on their machines (not acceptable).
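Here is roughly what I imagine option #2 would look like (Ember 1.x-era APIs; the template URL scheme is a placeholder, and in-browser compilation needs the full Handlebars build rather than the runtime-only one):

    // Fetch a template over Ajax, compile it, and register it under its name.
    function loadTemplate(name) {
      return Ember.$.ajax({ url: "/templates/" + name + ".hbs", dataType: "text" })
        .then(function (source) {
          Ember.TEMPLATES[name] = Ember.Handlebars.compile(source);
        });
    }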
I need to find a simple and elegant solution to address this issue. My project is in a solution with 35 other projects and I cannot add too much complexity to the build, nor depend on unstable libraries. Maybe I have been too optimistic when I thought I could use Ember for my project. Any suggestion would be welcome!
Your #3 is the most ideal (and common) way that I've seen applications handle templates. With a compiled and minified template file you really don't have to worry too much about performance problems in regards to adding new templates, especially if you take advantage of caching.
One benefit to having the templates compiled and available off the bat is that users only need to Download Your Resources Once™, as opposed to downloading resources for each subsequent page load. This leads to a fantastic user experience.
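As a sketch of how #3's output can be delivered (the bundle name and script path are illustrative), the precompiled template file can be registered with ASP.NET's bundling so it gets minified and cached like any other script:

    // In BundleConfig.RegisterBundles (System.Web.Optimization)
    bundles.Add(new ScriptBundle("~/bundles/templates")
        .Include("~/Scripts/templates.js")); // the precompiled Handlebars output

    // In the layout view, emit the (cache-busted) bundle:
    // @Scripts.Render("~/bundles/templates")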
I'm new to objective-c & osx architecture. I started playing with building a framework and then using it. I followed this great tutorial.
During the tutorial, I had to set the framework's target's Dynamic Library Install Name to @rpath/MyFramework.framework/Versions/A/MyFramework. My understanding is that @rpath will expand to the loader's (consumer's) run-path search paths.
It seems as if the responsibility of loading the framework is split between the framework author and the consumer author. Could someone please explain why the author of the framework needs to be concerned with the consumer's run-path search path? For example, if the framework author set the Dynamic Library Install Name to point to some random directory (instead of @rpath), how would the client be able to consume the framework?
Thanks in advance.
It depends a lot on how the framework is being used. And it's important to remember that the framework construct has existed for a long time on the platform.
For a system framework, such as the ones that Apple creates, you're going to be quite happy that they keep the frameworks in a known location. In those cases, the paths that they use are fixed for the OS, and it guarantees that you don't accidentally load the wrong one. Further, as indicated in the Framework documentation, these frameworks are loaded only once on the machine, regardless of how many times they are used (see Apple: What Are Frameworks). The benefit here is performance, and it applies to both the code and the resources in many cases.
Due to the recent move to randomize framework locations, and Apple's comments in the release notes that "Mountain Lion randomly relocates the kernel, kexts, and system frameworks at system boot," it certainly appears they're still sharing these resources, and thus still gaining from this benefit.
For embedded frameworks, the situation is a lot more tedious, and Apple has moved through a variety of methods over the years to make it easier to find frameworks wherever they may be. Due, again, to the shared nature, it would make sense for Applications which share common library requirements to share them on the machine, both for purposes of efficiency, and to make sure they're at the same version if they're sharing data. So, for example, if you have two separate apps that use the same framework to work with shared data, you might put the shared framework in /Library/Frameworks and have both apps explicitly look for that, making sure that some other (possibly older) version of the framework, that has been loaded by another App, is not used instead.
In the end, there's a lot of flexibility for the Framework producer and consumer the way that it currently works. It means that the developer can decide to share a framework, include a private copy of the framework, or even do both, depending upon whether the framework exists on the machine or not. However, the price for that flexibility is the complexity that we have today.
Another example of a reason you might not want to use @rpath specifically is for tightly-linked embedded frameworks (yes, people embed frameworks within other frameworks). In these cases, you don't know where the first framework is loaded, but you want to put the embedded framework inside of it, so that they stay together. In this case @loader_path is relative to the code that is loading it, so that your plug-in's framework can find its resources correctly.
In answer to your specific example about somebody setting the Dynamic Library Install Name to a "random" location: in this case, you'd have to know that location. There might be many reasons for somebody doing this, such as wanting to discourage reuse by other programs, or because there are large resources within the framework that should only be installed in a known shared location.
The company I work for is building a managed force.com application as an integration with the service we provide.
We are having issues working concurrently on the same set of files due to the shoddy tooling that is provided with the force.com Eclipse plugin. If 2 developers are working on the same file, one is given a message that he can't save -- once he merges he has to manually force the plugin to push his changes to the server along with clicking 2 'Are you really sure' messages.
Basically, the tooling does a shoddy job of merging in changes and forces minutes of work every time the developer wants to save if another person has modified the file he's working on.
We're currently working around this by basically 'locking' individual files by letting co-workers know who is editing a file.
It feels like there has got to be a better way in this day and age. Does anyone know of a different toolset we could use, process we could change, or anything we can do to make this easier?
When working with the Force.com platform my current organisation has found a number of different approaches can work depending on the situation. We all use the Eclipse Force.com plugin without issues and have found the following set ups to work well.
We have a centralised version control system from which we deploy, using a series of ant commands, to a developer org instance. Then, depending on the scope of the work, we either separate it off into chunks, with each developer having their own development org and merging the changes and testing them regularly, or we work in a single development org together (which with 2 developers should be no major problem), allowing almost instant integration.
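For reference, the deploy step uses the Force.com Migration Tool's ant tasks; a minimal target looks roughly like this (the credentials, properties, and paths are placeholders):

    <!-- build.xml: requires ant-salesforce.jar, with xmlns:sf="antlib:com.salesforce" -->
    <target name="deployCode">
      <sf:deploy username="${sf.username}" password="${sf.password}"
                 serverurl="${sf.serverurl}" deployRoot="src"/>
    </target>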
If you are both trying to work on the same file you should be pair programming anyway, but if working on two components of a similar system together, sharing the same org can allow you to develop in a fast and flexible manner by creating the skeleton of the system you wish to use and then individually fleshing out the detail.
I have used both methods extensively and, as I say, both work really well depending on the situation.
Each developer could work in a separate development sandbox (if you have Enterprise Edition, I think 10 sandboxes with full config & a limited amount of data are included in the fee?). From time to time you would merge your changes (a diff tool from any version control system should be enough) and test them in an integration environment. The chain development -> integration -> system test -> Q&A -> production can be useful for other reasons too.
A separate trick to consider can be used if, for example, 2 guys work on the same trigger. I learned it on the "DEV 401" course for developers.
Move all your logic to classes. Seriously. They will be simpler to unit test too.
Add a custom field (multi-select picklist) to the User object. Values should be equal to each separate feature people are working on. It can hold up to 500 values so you should be safe.
For the User account of developer 1, set "feature1" in the picklist. Set "feature2" for the other guy.
In the trigger, write an if statement that tests for the presence of each picklist value and either enters or skips the call to the relevant class (see the sketch after these steps). This wastes 1 query but you are sure that only the code you want will be called.
Each developer keeps on working in his own class file.
For an integration test of both features, simply set the multi-select to contain both features.
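A minimal sketch of the trigger side of this trick (the Features__c field name, the object, and the handler classes are hypothetical):

    trigger AccountTrigger on Account (before insert, before update) {
        // One query to read the running user's enabled features.
        User u = [SELECT Features__c FROM User WHERE Id = :UserInfo.getUserId()];
        String features = u.Features__c == null ? '' : u.Features__c;

        // Each developer's logic lives in his own class; only enabled features run.
        if (features.contains('feature1')) {
            Feature1Handler.run(Trigger.new);
        }
        if (features.contains('feature2')) {
            Feature2Handler.run(Trigger.new);
        }
    }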
I found this trick especially useful when other guy's code turned out to be non-optimal and ate too many resources. I've just disabled his feature on my user account and kept on working.
This trick can to some extent be applied to Visualforce pages too (if you can divide them into components).
If you don't want to waste a query - use some logic like "user's first name contains X" ;)
We had/have the exact same problem: we have a team of 10 devs working on a force.com application that has loads of Apex classes (>300) and VF pages (>300).
We started using the Eclipse plugin but found it:
too slow: working outside of the USA, each save takes > 5 seconds
too prone to merge issues with a team of 10 developers
Next we tried developing in our own individual sandboxes and then merging code. This is OK for a small project, but when you have lots of files and need changes to be pushed between sandboxes it becomes impossible to manage, as the only thing worse than force.com's development tooling is force.com's deployment/build tools. No automation; it's all manual. No easy way to move data between sandboxes either.
Our third approach was to just edit all our VF pages and Apex code in the browser - not using their embedded editor that shows up in the bottom half of the page (because that is buggy and slow), but just using the regular editor under Setup > Develop > Apex Classes. This worked OK. To supplement this we also had a scheduled job that would download all our code and save it into our SVN repository. We also built a tool that allowed us to click a folder on our desktop, zip its contents, and deploy it as static resources for us.
However this approach still has its shortcomings, i.e. it is slow and painful to develop in the cloud; their (Salesforce) idea of Development as a Service is crazy. Also we have no real SCM; we only have it acting as backups.
Bottom line is force.com is a CRM and not a development platform. If you can, run, flee, get away from it as fast as you can. Using it for anything apart from a CRM is more trouble than it is worth. Even their slogan "No Software" makes me laugh every time.
I'm not familiar with force.com, but couldn't you use source control and pull all the files down from force.com into your repository? Then you could all do your work and merge your changes back into the mainline. Then, whenever it's necessary, push the mainline up to force.com?
Take a look at the "Development Lifecycle Guide: Enterprise Development on the Force.com Platform". You can find it on developer.force.com's documentation page.
You might want to consider working on separate static resources and pages and then just being careful when editing objects, classes etc.. If most of your development happens on client side code (page, staticresource, lightning component/app) you might be interested in this project: https://github.com/bvellacott/salesforce-build . In any case I strongly suggest using version control. If not on a server then at least locally on your machine, in case your peers overwrite your work.
I am rather new to Sitecore and would like to know a bit more about the regular approach to a new project. I'm therefore willing to listen to and try out some of the experienced Sitecore developers' solutions. I have a lot of questions; I won't ask them all. I am just very curious about the approach other people have.
What would be the best approach to start a Sitecore project?
How would you set your project up?
What will be your approach looking at the recycling of code in future projects?
In short: What experiences do YOU have (if you have worked with or are working on Sitecore projects) and how would you recommend other people to work with Sitecore.
Right now we are busy building Sitecore blocks that we can just recycle in other projects, but I know for sure there are 1001 handy tips and tricks out there. I hope we have some Sitecore pros @ Stack Overflow who could help a bit.
Here is some general setup info, based on how we do things.
Subversion
This is not Sitecore specific but we set up our repository like this
branches - This is used for working on big updates to the site that may take a while. Say for example I wanted to update how all of the sidebars on the site worked, and this was going to take a few weeks to complete. What we do is create a new branch, and set up another sitecore instance for this dev branch and do what we need to do. When it is complete we merge it back into the trunk for testing and deployment.
tags - This is used for keeping a copy of code that will never be merged back into the trunk (that is the difference between this and branches), so for example when we deploy an update to a site we can create a tag of said code so we can go back to it if necessary.
trunk - The active code, anything checked in here should always be deployable.
The Trunk
This is where we are actively developing/fixing bugs, depending on which part of the project that we are on. We set it up something like this (as an example the project is called TheProject)
We keep our solution file at the root of this folder, this will reference the various libraries in the src folder as well as the web project in the website folder.
docs - A place to put documentation about the site. I strongly suggest that as you complete features/sections you write up a little guide about any special knowledge needed for it to work. So say I am working on a featured content box on a landing page. This box will automatically pull some content unless it is explicitly overridden. What I do when I complete something like this is I write a guide for the customer, using a lot of screenshots. I send the guide to the customer as well as put it in the docs folder. This both helps the customer train their staff, as well as helps new developers come up to speed with how things are done.
lib - This is where we keep any DLLs we are going to need to reference in our projects.
test - A place to put unit tests.
src - This is where we keep our project-specific library code. So in here we would have a folder called TheProject.Library, and in there would be the Visual Studio project for said library.
web/Website - This is where we have Sitecore installed and is the root of the site. In here we have a project called something along the lines of TheProject.Web. In the project we add all of the general stuff like the web.config/layouts folder and so on.
General Sitecore Code Library
One of the best things you can do is, from the start, set up a general Sitecore library that can be added to over time. Then when you write any code for a project that is not only applicable to that project, you can add it there. It may seem obvious, but this will really help in the long term; you will end up with much more solid code.
So when we are done with all this we have something like this as a solution/project structure
TheProject (The solution)
TheProject.Library
TheProject.Web
MyCompany.SitecoreLibrary (our general sitecore library)
Tools
This is another general thing, but I find it can really help speed up Sitecore development. If you find yourself doing something over and over in Sitecore, use the API to write a tool that does it for you. This not only helps with solving whatever problem you are tackling, but also helps you get more familiar with the API.
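For instance, a throwaway tool that bulk-creates items under a node takes only a few lines against the API (the content path and the use of the folder template here are illustrative):

    // Create a handful of items under /sitecore/content/Home in the master database.
    Sitecore.Data.Database master = Sitecore.Data.Database.GetDatabase("master");
    Sitecore.Data.Items.Item home = master.GetItem("/sitecore/content/Home");
    using (new Sitecore.SecurityModel.SecurityDisabler())
    {
        foreach (string name in new[] { "Section1", "Section2", "Section3" })
        {
            home.Add(name, new Sitecore.Data.TemplateID(Sitecore.TemplateIDs.Folder));
        }
    }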
Resharper
This is more of a general .NET development suggestion: use ReSharper (http://www.jetbrains.com/resharper/index.html). I am sort of a ReSharper fanboy; it makes so many things in development easier and quicker. In my mind the biggest advantage, though, is how easy it makes refactoring code, which is really important to do over time to keep things clean and understandable.
I hope some of this helps.
Gabe
This is, as you said, quite a big question. Here are some of my thoughts:
Developing Environment
First of all, when I start a new project I install Sitecore on my development environment and I make sure everything works. Either during installation or afterwards I place the databases on a separate SQL Server and change the connection string accordingly.
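By way of example, pointing the instance at a separate SQL Server is a matter of editing the entries in App_Config/ConnectionStrings.config (the server, user, and database names below are placeholders):

    <connectionStrings>
      <add name="core"   connectionString="user id=sc_user;password=***;Data Source=SQLSERVER01;Database=MySite_Core" />
      <add name="master" connectionString="user id=sc_user;password=***;Data Source=SQLSERVER01;Database=MySite_Master" />
      <add name="web"    connectionString="user id=sc_user;password=***;Data Source=SQLSERVER01;Database=MySite_Web" />
    </connectionStrings>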
I open up Visual Studio and create a solution and include the files needed. I create some kind of HelloWorld rendering and try building the solution so that I can verify that everything is working as it should.
When everything is up and running I create a zip-file of the whole solution, including the data-folder. Now it is time to add this to some kind of version control system, in my case Subversion.
I add the zip-file to Subversion and also add all files that I think will be changed during the project. Usually I tell Subversion to ignore the sitecore folder; this speeds up performance drastically when checking in files.
After I perform a commit action, the other team members on my project can check out the code and start developing (after unzipping the zip-file, of course).
We all work towards the same database. Although this goes against Sitecore's recommendations, we haven't had any problems with this approach; however, items created/changed in the GUI by one developer take some time before they are created/changed for all the others.
We could of course develop several different projects using the same Sitecore installation, but since almost all customers use different versions of Sitecore we have found this approach a bit cumbersome.
Often we set up an automated build-server but this is a whole other issue.
Reusable code and renderings
I would like to say that we create neat packages based on the same codebase that gets reused between projects but unfortunately we are not there yet. Today it is a lot of cut and pasting between solutions.
Uploading code to customer
This is done via Sitecore packages, normally with some kind of dynamic selection of the files to include, say all ascx-files in a specific folder changed in the last 5 days.
There you have it.
Take a look at this series.
Especially the component architecture part has increased our level of reusability.
When you create your Visual Studio project in Sitecore's web root folder and keep all of Sitecore's DLL files inside the bin directory, don't forget to add references to all of these files in your project:
bin\ComponentArt.Web.UI.dll
bin\HtmlAgilityPack.dll
bin\ITHit.WebDAV.Server.dll
bin\Lucene.Net.dll
bin\Mvp.Xml.dll
bin\Newtonsoft.Json.dll
bin\RadEditor.Net2.dll
bin\Sitecore.Kernel.dll
bin\Sitecore.Logging.dll
bin\Sitecore.NVelocity.dll
bin\Sitecore.Zip.dll
Because when you CLEAN your project and have a reference only to Sitecore.Kernel.dll (as in most cases), you will lose most of the DLLs from the bin directory!!