Ampersand.js admin panel makes it a 2 page app? - singlepage

Ampersand is a single-page application framework, so would I be messing things up by creating a separate page with roughly as much functionality, not even necessarily related to the visitor-facing page (a small business application that may want additional management-related features)?

Well, as always, it depends. Basically, if your admin pages don't add much overhead to the size of your app, you can keep it a SPA. The "heaviest" parts are likely to be the various libraries that you use, and if those libraries are the same for admins and for regular visitors, then your own views and related code will not add much (especially if you are minifying and gzipping everything, and you should be). But if you use, say, TinyMCE + the full lodash + ... for admin purposes while normal visitors don't need them, then you should probably split it into two separate apps, since you don't want your visitors to load an extra 300 KB.
From the security point of view it should not be a problem, since all requests to your API should be checked server-side. So even if somebody gets access to the admin views, they should be unable to get or post anything they don't have rights for.
P.S. As browserifying can take a while, I really recommend using the watch option if you are not doing so yet; it will really speed up compilation when you change code.
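To make the split-bundle and watch advice concrete, here is a minimal sketch using browserify's Node API with watchify for incremental rebuilds; the entry and output paths are hypothetical:

    // build.js: separate visitor and admin bundles, so regular visitors
    // never download admin-only libraries. Paths are hypothetical.
    var fs = require('fs');
    var browserify = require('browserify');
    var watchify = require('watchify');

    ['visitor', 'admin'].forEach(function (name) {
      var b = browserify('./client/' + name + '.js', {
        cache: {},        // required by watchify
        packageCache: {}, // required by watchify
        plugin: [watchify]
      });
      var bundle = function () {
        b.bundle().pipe(fs.createWriteStream('./public/' + name + '.bundle.js'));
      };
      b.on('update', bundle); // incremental rebuild when a file changes
      bundle();
    });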

Related

What solutions are there for adjusting text in an app without having to dive into code? [React Native]

First of all, this is my first Stack Overflow post, so sorry if I am missing context or if the question is too out of the ordinary. Onto some context and an example use case for my question.
Context
I'm making a simple app for a uni assignment in React Native, but one of the requirements from the client we are making it for is that the text (copy) in the app can be adjusted by him after we deliver it.
Example Use Case
They want to change the text on the welcome screen from "Welcome, [Name]" to "Hello there, [Name]". But they aren't technical, so they expect to be able to change this in a simple UI.
I've tried googling for solutions but keep finding localization solutions instead, and all I've found so far require in-code edits.
Example:
🌍 react-native-localize & Expo Localization Docs
Any help/pointers are much appreciated!
There are two basic ways to solve this issue.
Create an API that the app calls to get the text, plus web or app tools the client can use to edit the text behind that API. This is usually called a CMS (content management system). There is a huge range of options, including building your own; see the sketch after the cons list below.
Pros:
the client can maybe manage their own content without intervention... sometimes
content changes are "instant" - without an app release
Cons:
non-trivial to plan and set up
requires support/maintenance
additional costs for hosting
additional app complexity (need to think about error states, caching, polling?)
app NEEDS to be online to have content
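As a minimal sketch of the CMS approach (the endpoint URL and key names are hypothetical, not a specific product), the app can fetch its copy at startup and fall back to bundled defaults when offline:

    // copy.js: fetch editable copy from a hypothetical CMS endpoint,
    // falling back to bundled defaults when the request fails.
    import React, { useEffect, useState } from 'react';
    import { Text } from 'react-native';

    const DEFAULT_COPY = { welcome: 'Welcome, {name}' };

    export function useCopy() {
      const [copy, setCopy] = useState(DEFAULT_COPY);
      useEffect(() => {
        fetch('https://example.com/api/copy')                 // hypothetical endpoint
          .then((res) => res.json())
          .then((remote) => setCopy({ ...DEFAULT_COPY, ...remote }))
          .catch(() => {});                                   // offline: keep bundled defaults
      }, []);
      return copy;
    }

    export function Welcome({ name }) {
      const copy = useCopy();
      return <Text>{copy.welcome.replace('{name}', name)}</Text>;
    }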
OR
The client submits a ticket for the text changes and a developer makes them.
Pros:
doesn't require the creation/maintenance of additional tools
app doesn't need to make network requests for content
Cons:
changes take longer and require a new release of the app

How to proliferate access permission to Javascript MVC apps

I recently finished one of my first AgilityJS projects, which is a web-based file browser that lets you create and manage folders and files, and navigate around the folder tree. I followed the various AgilityJS recommendations regarding the design and ended up with all my HTML and Javascript in a single Javascript file.
Now, I would like to provide a "read-only" version of this app which does not have the ability to add/edit/remove files and folders. I'd like to have 2 user types on the website, one type which can only read the files and folders, and another user type who can administer.
My question is: how do I propagate these permission differences to my AgilityJS app? I know how to secure my endpoints and operations on the server side, but I'm wondering about the best way to do this on the client side. Should I create a separate version of the app with a limited set of functionality? Should I simply hide certain buttons/features? Are there theories, frameworks, etc. which deal with this issue? Any pointer in the right direction would be helpful.
LOL - probably one could write books about that topic. Some very basic ideas:
I would start with the philosophical debate around MVC. Some people argue, in the spirit of MVC, that no piece of code and no piece of the data model should ever be implemented twice: business logic and model belong on the server. The opposite view focuses on serving users at any cost, even if that means double-maintaining code or the model for the sake of avoiding extra round trips. The middle way defines a master source for business code and model, and makes sure that other places follow that leading master (the master is changed first). Take your pick; your answer to that question sets the boundaries for how the user interface can, or has to, look.
You need to think hard about a permissions concept. Looking at Microsoft, I would assume they invested a couple of dozen man-years across their applications to come up with their permission concepts. The ideal permission concept depends very much on your application, so it is close to impossible to work this out without knowing at least a little about it. However, the permission concept has to come up with policies covering roles, groups, access rights, access levels, context-driven permissions (e.g. based on IP address), and permission black- or whitelisting (the permissions each user has at creation). An example from Microsoft: http://office.microsoft.com/en-us/windows-sharepoint-services-help/permission-levels-and-permissions-HA010100149.aspx
Data on the client is not secured!!! Whatever you do on the client, be it data hiding, encryption, compression... - if it is done on the client, there are ways to read the data (even with data manipulation disabled) or to revert those measures. Somebody can send data to your server even where the client never offered an update form; hackers can craft such requests themselves. So as soon as you start to implement permissions, make sure that users are permitted to read every piece of data you send to clients, and include permission checks every time you add or update data in the database.
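To make that last point concrete, here is a minimal sketch of server-side enforcement, assuming an Express-style API; req.currentUser and the can() helper are hypothetical stand-ins for your own auth layer:

    // Permission checks run on every request, no matter what the client
    // UI shows or hides. can() is a hypothetical helper.
    var express = require('express');
    var app = express();
    app.use(express.json());

    function requirePermission(action) {
      return function (req, res, next) {
        if (!req.currentUser || !can(req.currentUser, action)) {
          return res.status(403).json({ error: 'forbidden' });
        }
        next();
      };
    }

    // Read-only users may GET; only administrators may create/edit/remove.
    app.get('/files/:id', requirePermission('read'), function (req, res) { /* ... */ });
    app.post('/files', requirePermission('write'), function (req, res) { /* ... */ });
    app.delete('/files/:id', requirePermission('write'), function (req, res) { /* ... */ });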

Using Magento as the main, and creating a single sign on to integrate with other third party software

This has been something I have been trying to work on for a good long time. It first started with Prestashop as an integration with other scripts or pieces of the puzzle I needed to make for an overall website. I am currently still using Prestashop as my webstore but have since switched to Magento.
I switched to Magento because of its flexibility and because, overall, I think it is the best eCommerce solution with the best backing.
That being said, the same issues I was having with Prestashop appear to be the ones I will continue to have in any aspect where I try to integrate things together in perfect harmony.
I have Magento set up as the main portion of the website, and inside Magento, in subfolders, I have WordPress installed in a folder called "articles". I have also gone with FluxBB for my message forums, because of its simplicity and its lack of bloated extra features I couldn't care less about; that lives in a subfolder called "forums".
From this point, we know that Magento, WordPress and FluxBB all have their own ways of managing users: creating, managing, and tracking them.
What I want to do is find the best way to fit these three (and more) together to make the experience for the customer as smooth and as functional as possible. After emailing the ever-talented and helpful Alan Storm, he told me the best solution he was aware of was a third-party user management system that they all point to, which handles customer authentication. I believe his thought may be the best one, but I wanted to put this out here on Stack Overflow (and I may post it on the Magento site as well) to get the broad scope of Magento developers and smart folks who like challenges.
I have several thoughts; none may work, some may work half-way, or one may just be workable. But first let me tell you what I have accomplished so far. I have done the necessary steps to integrate my overall design for the header and footer, so essentially WordPress and FluxBB are wrapped and contained inside Magento's outer design layer. I have also made Magento check the session to see whether the user is logged in, saying "Hello Guest" or "Hello User" accordingly. This is where I have hit a stopping point, because I am out of my depth and would like assistance, whether it is something we create together out of pure challenge or something I pay someone to help with; either way I would like this accomplished. If and when I get the code figured out, whether by paid assistance or group effort, I would like to make it freely available for others to use the concept in their own projects.
Brain Fart #1:
Adjust the user tables for both WordPress and FluxBB to conform more to the structure of Magento, as far as the password and username/email login portions go. The rest of the fields can stay as they are for post counts etc.
From there, I would like to figure out which class in Magento does the actual insert into the database when a customer is created through registration. When I find that code, I would like to extend it to copy the user credentials into the other two tables in the database for WordPress and FluxBB. If necessary this can just be a couple of fields added to WordPress and FluxBB, if that seems like a better idea; and yes, I do mean the actual encrypted password that Magento creates. I want this to be secure as well.
From there, once we know that when a customer registers with Magento the data is copied over to the other two tables, we have at least made progress; whether this progress will actually work is still to be determined.
We then disable the login/logout and registration links in WordPress and FluxBB in any way we can, because they will no longer be needed; we want the user to register, log in and log out through one location, which is Magento.
Then comes the fun part in my eyes: keeping the damn session going throughout the entire website as they order products, read WordPress articles and possibly leave comments or send them to friends, as well as post topics and replies in FluxBB.
To me this is where creating the fields, or adding the data from Magento's customer registration, comes into play: I can make it check whether they are logged into Magento already, and from there we may be able to have it validate itself. This may be overkill, or this may just be how it needs to be done. But to me, if the credentials are located in all three databases, then they should be able to be validated by changing or adding code in WordPress and FluxBB. And yes, I am aware that we will also have to do something about profile editing and password editing if a customer desires to change their information.
But that is my first thought on this, whether it is the right decision or not. I would like to hear from the people here who have more experience and knowledge with Magento, PHP and everything else than I do.
Brain Fart #2
This illogical idea seems like an outside stretch entirely to me, because of the complexity of Magento and how it is set up overall.
The idea is to edit WordPress and FluxBB (and any other third-party software) to pretty much ignore their own methods of registration, login, logout and editing, and to look to Magento for credentials and for establishing new customers, essentially making them oversized modules of Magento.
I just know that Magento is set up to be modularized, and its complexity seems like it would take a lot more coding and troubleshooting to pull this off.
Brain Fart #3
Dump both WordPress and FluxBB and look to modules in the Magento Connect store that have pretty much all of the functionality I need, adding whatever is missing, rather than messing with integrating third-party software.
I love WordPress; after the hours I have spent looking at all of the CMS/news-related modules available, replicating it with a module is a tough call. FluxBB I could take or leave; if someone had an already viable solution using phpBB or vBulletin or SimpleMachines, I would go with that. I would rather it be free open source software, not because I am a cheapskate but because I support open source as much as I can.
Brain Fart #4
Could this be done with a cookie? That would only be effective if visitors allow cookies. Or could something be added onto the session to let things pass through? Magento sets up different sessions (or allows you to), so things may crash against each other; this may not be an idea at all, or it may be one after all.
I know I am not giving examples of things I have tried or files I have looked at, and I apologize. I provide some related links below, but nothing I have found so far specifically matches what I am trying to accomplish. And I have tried to merge things together, with some fun, disastrous results.
Link Examples?:
http://www.magentocommerce.com/wiki/doc/webservices-api/api/customer#customer.create
http://www.magentogarden.com/blog/how-are-passwords-encrypted-in-magento.html
http://www.nicksays.co.uk/magento_events_cheat_sheet/
http://www.magentocommerce.com/wiki/5_-_modules_and_development/customers_and_accounts/registration_fields
How to access Magento customer's session from outside Magento?
Any assistance with this would be nice. I am trying to work on several parts of the website at once, and this one is troublesome; I would say everyone who attempts it finds it hard. Anyone like challenges? :)
--------- EDIT:
I have got Magento and WordPress to work perfectly together with James Kemp's module found on CodeCanyon (Single Sign-On for Magento and WordPress), and I am going to adapt it to work for FluxBB or anything else I add.
Just passing along the information... I see this was edited; I don't know what was edited and don't care. Just passing along information I have found since posting this.
I am managing/customizing a combo of Magento + Vanilla Forums + a custom app made with the Yii framework. The users are "shared" between the apps. Neither of the two linked approaches is good. As Alan already replied to you, the correct SSO would use an external user database/manager. But not everyone is up for recoding three apps just to get a one-post-a-week forum and a one-article-a-month blog to work with Magento, so we are left with fewer options. First of all, if you don't want (most probably not) to rewrite a good portion of an already-written open source project that is being updated and maintained, and then maintain your changes against periodic updates (you want those), then you have to duplicate the user data over three databases, unless the project you adapt has some way to manage user data as a plugin or external module. AFAIK, neither of your choices does.
So, how to implement it? Assuming you choose Magento as the mother of all, you need it to expose an API for authentication. This can work in the browser using cookies and JavaScript, but that is rather tricky; alternatively, you can use Magento's frontend cookie to validate sessions via server-to-server API requests from the child apps, which is the preferred option as far as "classical" SSO goes. Technically, when your users open the forum or blog, the respective app detects Magento's cookie and checks whether the session is valid and who the user is. If the user is found, their data is copied to the blog or forum tables, and then you start an authenticated session in the blog or forum app using the newly created user record.
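A rough sketch of that cookie-check flow in Node/Express-style pseudocode (the endpoint, helpers and internal host name are hypothetical, and this is not Magento's actual API; it assumes cookie-parser and express-session middleware):

    // Child app (blog/forum): detect the shop's session cookie, validate it
    // server-to-server, copy the user into local tables, start a local session.
    var express = require('express');
    var request = require('request');
    var app = express();

    app.use(function (req, res, next) {
      var sid = req.cookies.frontend;          // "frontend" is Magento 1's session cookie
      if (!sid || req.session.userId) return next();
      request('http://shop.internal/sso/whoami?sid=' + sid, function (err, resp, body) {
        var user = !err && resp.statusCode === 200 && JSON.parse(body);
        if (user) {
          upsertLocalUser(user);               // hypothetical: sync into the child app's user table
          req.session.userId = user.id;        // start the authenticated child session
        }
        next();
      });
    });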
So far so good, but there is still some work. You need to disable user profile management in the child apps, or modify it so that the data held in Magento is always the authoritative copy, and you need to invent something to synchronize Magento's representation of the user profile down to the children. This is best hooked onto Magento's events, so that every time a user changes their profile the data is updated in the child apps. But there is another "but": you probably want to keep some data app-specific; a display name on the forum is not necessarily the FirstName + LastName from Magento, and some users would like to keep that private.
The above is just what I can recall as interesting facts about keeping it running. There are certainly many other things I've left out, more or less specific. But hopefully my comment can help your brain farting.
We've tried to evaluate other options, but anything that avoids duplicated data seems too expensive to implement or maintain. Maybe later, with budget and time.

Separate REST JSON API server and client? [closed]

I'm about to create a bunch of web apps from scratch. (See http://50pop.com/code for overview.) I'd like for them to be able to be accessed from many different clients: front-end websites, smartphone apps, backend webservices, etc. So I really want a JSON REST API for each one.
Also, I prefer working on the back-end, so I daydream of me keeping my focus purely on the API, and hiring someone else to make the front-end UI, whether a website, iPhone, Android, or other app.
Please help me decide which approach I should take:
TOGETHER IN RAILS
Make a very standard Rails web-app. In the controller, do the respond_with switch, to serve either JSON or HTML. The JSON response is then my API.
Pro: Lots of precedent. Great standards & many examples of doing things this way.
Con: Don't necessarily want API to be same as web app. Don't like if/then respond_with switch approach. Mixing two very different things (UI + API).
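(As an aside for non-Rails readers, the same one-controller, two-representations idea looks roughly like this in Express; the route and data access are hypothetical, respond_with being the Rails counterpart.)

    // One route serving both the HTML page and the JSON API, chosen by
    // content negotiation. Assumes a view engine is configured.
    var express = require('express');
    var app = express();

    app.get('/articles', function (req, res) {
      var articles = loadArticles(); // hypothetical data access
      res.format({
        html: function () { res.render('articles/index', { articles: articles }); },
        json: function () { res.json(articles); }
      });
    });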
REST SERVER + JAVASCRIPT-HEAVY CLIENT
Make a JSON-only REST API server. Use Backbone or Ember.js for client-side JavaScript to access API directly, displaying templates in browser.
Pro: I love the separation of API & client. Smart people say this is the way to go. Great in theory. Seems cutting-edge and exciting.
Con: Not much precedent. Not many examples of this done well. Public examples (twitter.com) feel sluggish & are even switching away from this approach.
REST SERVER + SERVER-SIDE HTML CLIENT
Make a JSON-only REST API server. Make a basic HTML website client, that accesses the REST API only. Less client-side JavaScript.
Pro: I love the separation of API & client. But serving plain HTML5 is quite foolproof & not client-intensive.
Con: Not much precedent. Not many examples of this done well. Frameworks don't support this as well. Not sure how to approach it.
Especially looking for advice from experience, not just in-theory.
At Boundless, we've gone deep with option #2 and rolled it out to thousands of students. Our server is a JSON REST API (Scala + MongoDB), and all of our client code is served straight out of CloudFront (ie: www.boundless.com is just an alias for CloudFront).
Pros:
Cutting-edge/exciting
A lot of bang for your buck: API gives you basis for your own web client, mobile clients, 3rd party access, etc.
exceedingly fast site loading / page transitions
Cons:
Not SEO friendly/ready without a lot more work.
Requires top-notch web front-end folk who are ready to cope w/ the reality of a site experience that is 70% javascript and what that means.
I do think this is the future of all web-apps.
Some thoughts for the web front end folks (which is where all the new-ness/challenge is given this architecture):
CoffeeScript. Much easier to produce high-quality code.
Backbone. Great way to organize your logic, and active community.
HAMLC. Haml + CoffeeScript templates => JS.
SASS
We've built a harness for our front-end development called 'Spar' (Single Page App Rocketship), which is effectively the asset pipeline from Rails tuned for single-page app development. We'll be open-sourcing it within the next couple of weeks on our GitHub page, along with a blog post explaining how to use it and the overall architecture in greater detail.
UPDATE:
With respect to people's concerns about Backbone, I think they are overrated. Backbone is far more an organizational principle than a deep framework. Twitter's site is a giant beast of JavaScript covering every corner case across millions of users and legacy browsers, while loading tweets in real time, garbage collecting, displaying lots of multimedia, etc. Of all the 'pure' JS sites I've seen, Twitter is the odd one out. There have been many impressively complicated apps delivered via JS that fare very well.
And your choice of architecture depends entirely on your goals. If you are looking for the fastest way to support multiple clients and have access to good front-end talent, investing in a standalone API is a great way to go.
Very well asked. +1. This will surely be a useful reference for me in future. @Aaron and others added value to the discussion too.
Like Ruby, this question is equally applicable to other programming environments.
I have used the first two options: the first one for numerous applications, and the second one for my open source project Cowoop.
Option 1
This one is no doubt the most popular one. But I find the implementations very HTTP-ish: every API's initial code goes into dealing with the request object, so API code is more than pure Ruby/Python/other-language code.
Option 2
I always loved this.
This option also implies that HTML is not generated at runtime on the server; this is how option 2 differs from option 3. Pages are built as static HTML by a build script, and when loaded on the client side they call the API server as a JS API client.
Separation of concerns is a great advantage. And, very much to your liking (and mine), backend experts implement backend APIs and test them easily like usual language code, without worrying about framework/HTTP request code.
This really is not as difficult as it sounds on the frontend side. Make API calls, and the resulting data (mostly JSON) is available to your client-side template or MVC.
Less server-side processing, which means you can go for commodity hardware / a less expensive server.
Easier to test layers independently; easier to generate API docs.
It does have some downsides.
Many developers find this over-engineered and hard to understand, so chances are the architecture may get criticized.
i18n/l10n is hard. Since the HTML is essentially generated at build time and static, one needs a build per supported language (which isn't necessarily a bad thing). But even with that you may have corner cases around l10n/i18n and need to be careful.
Option 3
Backend coding in this case is much the same as for the second option, and most points for option 2 apply here as well.
Web pages are rendered at runtime using server-side templates. This makes i18n/l10n much easier, with more established/accepted techniques. It may also mean one less HTTP call for the essential context needed for page rendering, like user, language, currency, etc. So server-side processing is increased by rendering, but possibly compensated by fewer HTTP calls to the API server.
Now that pages are rendered on the server, the frontend is more tied to your programming environment. This might not even be a consideration for many applications.
Twitter case
As I understand it, Twitter does its initial page rendering on the server, but for page updates it still makes API calls and uses client-side templates to manipulate the DOM. So in such a case you have two sets of templates to maintain, which adds overhead and complexity. Not everyone can afford this option, unlike Twitter.
Our project Stack
I happen to use Python, and I use JSON-RPC 2.0 instead of REST (I suggest REST, though I like the idea of JSON-RPC for various reasons). I use the libraries below; somebody considering option 2/3 might find them useful.
API server: Flask, a fast Python web micro-framework
Frontend server: Nginx
Client side MVC: Knockout.js
Other relevant tools/libs:
jQuery
Accounting.js for money/currency formatting
Webshim: cross-browser polyfills
director: client-side routing
sphc: HTML generation
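Given the stack above (Knockout.js as the client-side MVC), here is a minimal sketch of how thin the option-2 client can be; the endpoint and fields are hypothetical, and knockout.js plus jQuery are assumed to be loaded via script tags:

    // A static page's script: call the JSON API, bind the result.
    function PageViewModel() {
      var self = this;
      self.user = ko.observable(null);          // bound in the HTML via data-bind
      $.getJSON('/api/v1/user', function (data) {
        self.user(data);                        // e.g. { name: "...", currency: "..." }
      });
    }
    ko.applyBindings(new PageViewModel());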
My conclusion and recommendation
Option 3!
All said, I have used option 2 successfully, but I am now leaning towards option 3 for its simplicity. Generating static HTML pages with a build script and serving them with one of the ultra-fast servers that specialize in static pages is very tempting (option 2).
We opted for #2 when building gaug.es. I worked on the API (ruby, sinatra, etc.) and my business partner, Steve Smith, worked on the front-end (javascript client).
Pros:
Move quickly in parallel. If I worked ahead of Steve, I could keep creating APIs for new features. If he worked ahead of me, he could fake out the API very easily and build the UI.
API for free. Having open access to the data in your app is quickly becoming a standard feature. If you start with an API from the ground up, you get this for free.
Clean separation. It is better to think of your app as an API with clients. Sure, the first and most important client may be a web one, but it sets you up for easily creating other clients (iPhone, Android).
Cons:
Backwards compatibility. This is more related to having an API than to your direct question, but once your API is out there, you can't just break it or you break all your clients too. This doesn't mean you have to move slower, but it does mean you often have to make two things work at once. Adding to the API or adding new fields is fine, but changing/removing shouldn't be done without versioning (see the sketch below).
I can't think of any more cons right now.
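To illustrate the versioning point, a sketch assuming Express-style routing (paths and payload shapes are hypothetical): v1 stays frozen for existing clients while v2 is free to change.

    var express = require('express');
    var app = express();

    // v1 keeps its original shape for existing clients...
    app.get('/api/v1/widgets', function (req, res) {
      res.json([{ id: 1, name: 'Widget' }]);
    });
    // ...while v2 can evolve independently.
    app.get('/api/v2/widgets', function (req, res) {
      res.json({ widgets: [{ id: 1, title: 'Widget' }] });
    });

    app.listen(3000);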
Conclusion: API + JS client is the way to go if you plan on releasing an API.
P.S. I would also recommend fully documenting your API before releasing it. The process of documenting the Gaug.es API really helped us improve it:
http://get.gaug.es/documentation/api/
I prefer to go the route of #2 and #3, mainly because #1 violates separation of concerns and intermingles all kinds of stuff. Eventually you'll find the need for an API endpoint that does not have a matching HTML page, and you'll be up a creek with intermingled HTML and JSON endpoints in the same code base. It turns into a freaking mess; even if it's an MVP, you'll have to rewrite it eventually because it's so messy that it's not even worth salvaging.
Going with #2 or #3 allows you to have an API that acts the same (for the most part) regardless of the client. This provides great flexibility. I'm not 100% sold on Backbone/Ember/whatever.js just yet. I think they're great, but as we're seeing with Twitter this is not optimal. BUT... Twitter is also a huge beast of a company with hundreds of millions of users, so any improvement can have a huge impact on the bottom line in various areas of various business units. I think there is more to their decision than speed alone, and they're not letting us in on it. But that's just my opinion. I do not discount Backbone and its competitors, though; these apps are great to use, very clean, and very responsive (for the most part).
The third option has some valid allure as well. This is where I'd follow the Pareto principle (80/20 rule) and have 20% of your main markup (or vice versa) rendered on the server, and then have a nice JS client (Backbone etc.) run the rest of it. You may not be communicating 100% with the REST API via the JS client, but you will be doing whatever work is necessary to make the user experience better.
I think this is one of those "it depends" kinds of problems and the answer is "it depends" on what you're doing, whom you're serving and what kind of experience you want them to receive. Given that I think you can decide between 2 or 3 or a hybrid of them.
I'm currently working on converting a huge CMS from option 1 to option 3, and it's going well. We chose to render the markup server-side because SEO is a big deal to us, and we want the sites to perform well on mobile phones.
I'm using node.js for the client's back-end and a handful of modules to help me out. I'm somewhat early in the process but the foundation is set and it's a matter of going over the data ensuring it all renders right. Here's what I'm using:
Express for the app's foundation.
(https://github.com/visionmedia/express)
Request to fetch the data.
(https://github.com/mikeal/request)
Underscore templates that get rendered server side. I reuse these on the client.
(https://github.com/documentcloud/underscore)
UTML wraps underscore's templates to make them work with Express.
(https://github.com/mikefrey/utml)
Upfront collects templates and lets you choose which get sent to the client.
(https://github.com/mrDarcyMurphy/upfront)
Express Expose passes the fetched data, some modules, and templates to the front-end.
(https://github.com/visionmedia/express-expose)
Backbone creates models and views on the front-end after swallowing the data that got passed along.
(https://github.com/documentcloud/backbone)
That's the core of the stack. Some other modules I've found helpful:
fleck (https://github.com/trek/fleck)
moment (https://github.com/timrwood/moment)
stylus (https://github.com/LearnBoost/stylus)
smoosh (https://github.com/fat/smoosh)
…though I'm looking into grunt (https://github.com/cowboy/grunt)
console-trace (https://github.com/LearnBoost/console-trace).
No, I'm not using coffeescript.
This option is working really well for me. The models on the back-end are non-existent, because the data we get from the API is well structured and I pass it verbatim to the front-end. The only exception is our layout model, where I add a single attribute that makes rendering smarter and lighter. I didn't use any fancy model library for that, just a function that adds what I need on initialization and returns itself.
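To illustrate the template reuse described above, a sketch (the paths and the fetchPost helper are hypothetical): one Underscore template renders the first page load on the server, and its source is shipped along so Backbone views can compile the same template in the browser.

    var fs = require('fs');
    var _ = require('underscore');
    var express = require('express');
    var app = express();

    // One template source, compiled on the server...
    var postSrc = fs.readFileSync('./views/post.html', 'utf8'); // e.g. "<h1><%= title %></h1>"
    var postTemplate = _.template(postSrc);

    app.get('/posts/:id', function (req, res) {
      fetchPost(req.params.id, function (post) { // hypothetical data access
        // ...rendered server-side for the initial load, with the raw source
        // embedded so the client can compile the very same template.
        res.send(
          postTemplate(post) +
          '<script type="text/template" id="post-tpl">' + postSrc + '</script>'
        );
      });
    });

    app.listen(3000);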
We use the following variant of #3:
Make a JSON-only REST API server. Make an HTML website server. The HTML web server is not, as in your variant, a client to the REST API server. Instead, the two are peers. Not far below the surface, there is an internal API that provides the functionality that the two servers need.
We're not aware of any precedent, so it's kind of experimental. So far (about to enter beta), it has worked out pretty well.
I'm usually going for the 2nd option, using Rails to build the API, and backbone for the JS stuff. You can even get an admin panel for free using ActiveAdmin.
I've shipped tens of mobile apps with this kind of backend.
However, it heavily depends on whether your app is interactive or not.
I did a presentation on this approach at the last RubyDay.it: http://www.slideshare.net/matteocollina/enter-the-app-era-with-ruby-on-rails-rubyday
For the third option, in order to get the responsiveness of the second one, you might want to try pjax, as GitHub does.
I'm about 2 months into a 3 month project which takes the second approach you've outlined here. We use a RESTful API server side with backbone.js on the front. Handlebars.js manages the templates and jQuery handles the AJAX and DOM manipulation. For older browsers and search spiders we've fallen back onto server side rendering, but we're using the same HTML templates as the Handlebars frontend using Mozilla Rhino.
We chose this approach for many different reasons, but are very aware it's a little risky given it hasn't been proven at a wide scale yet. All the same, everything's going pretty smoothly so far.
So far we've just been working with one API, but in the next phase of the project we'll be working with a second API. The first is for large amounts of data, and the second acts more like a CMS via an API.
Having these two pieces of the project act completely independently of each other was a key consideration in selecting this infrastructure. If you're looking for an architecture to mash up different independent resources without any dependencies, this approach is worth a look.
I'm afraid I'm not a Ruby guy, so I can't comment on the other approaches. Sometimes it's okay to take a risk; other times it's better to play it safe. You'll know which, depending on the type of project.
Best of luck with your choice here. Keen to see what others share as well.
I like #3 when my website is not going to be a 100% CRUD implementation of my data. Which has yet to happen.
I prefer Sinatra, and will just split the app up into a few different rack apps with different purposes. I'll make an API-specific rack app that covers what I need for the API, then perhaps a user rack app that presents my web page. Sometimes that version will query the API if needed, but usually it just concerns itself with the HTML site.
I don't worry about it, and just do a persistence-layer query from the user side if I need to. I'm not overly concerned with creating a complete separation, as the apps usually end up serving different purposes.
Here is a very simple example of using multiple rack apps. I added a quick jQuery example in there so you can see it hitting the API app. You can see how simple it can be with Sinatra, mounting multiple rack apps with different purposes.
https://github.com/dusty/multi-rack-app-app
Some great answers here already - I'd definitely recommend #2 or #3 - the separation is good conceptually but also in practice.
It can be hard to predict things like load and traffic patterns for an API, and the customers we see who serve the API independently have an easier time provisioning and scaling. If you have to do that munged in with human web access patterns, it's less easy. Also, your API usage might end up scaling a lot faster than your web client, and then you can see where to direct your efforts.
Between #2 and #3 it really depends on your goals - I'd agree that #2 is probably the future of webapps - but maybe you want something more straightforward if that channel is only going to be one of many!
For atyourservice.com.cy we are using server-side rendered templates for pages, especially to cover the SEO part, and using the API for interactions after the page loads.
Since our framework is MVC, all controller functions are duplicated for JSON output and HTML output. Templates are clean and receive just an object. They can be transformed into JS templates in seconds; we always maintain the server-side templates and just reconvert them to JS on request.
Isomorphic rendering and progressive enhancement, which is what I think you were heading for in option three.
Isomorphic rendering means using the same template to generate markup server-side as you use in the client-side code. Pick a templating language with good server-side and client-side implementations. Create fully baked HTML for your users and send it down the wire. Use caching too.
Progressive enhancement means starting client-side execution, rendering and event listening once you've got all the resources downloaded and can determine the client's capabilities. Fall back to functional, client-script-less behaviour wherever possible for accessibility and backwards compatibility.
Yes, of course, write a standalone JSON API for this app functionality. But don't go so far that you write a JSON API for things that work fine as static HTML documents.
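As a minimal sketch of that progressive-enhancement pattern (the data attributes and endpoint are hypothetical): the server has already sent fully baked HTML, and the script only upgrades it when the browser is capable.

    // Enhance a server-rendered list in place; older browsers simply keep
    // the static HTML that was sent down the wire.
    document.addEventListener('DOMContentLoaded', function () {
      if (!window.fetch) return;               // capability check: keep static page
      var list = document.querySelector('[data-live-list]');
      if (!list) return;
      fetch(list.getAttribute('data-source'))  // hypothetical JSON endpoint
        .then(function (res) { return res.json(); })
        .then(function (items) {
          list.innerHTML = items.map(function (item) {
            return '<li>' + item.title + '</li>';
          }).join('');
        });
    });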
REST server + JavaScript-heavy client was the principle I've followed in my recent work.
The REST server was implemented in node.js + Express + MongoDB (very good write performance) + Mongoose ODM (great for modelling data, validations included) + CoffeeScript (I'd go with ES2015 now instead), which worked well for me. Node.js may be relatively young compared to other server-side technologies, but it made it possible for me to write a solid API with payments integrated.
I used Ember.js as the JavaScript framework, and most of the application logic was executed in the browser. I used SASS (SCSS specifically) for CSS pre-processing.
Ember is a mature framework backed by a strong community. It is very powerful, with lots of recent work focused on performance, like the brand-new Glimmer rendering engine (inspired by React).
The Ember core team is in the process of developing FastBoot, which lets you execute your Ember JavaScript logic on the server side (node.js specifically) and send pre-rendered HTML of your application (which would normally run in the browser) to the user. It is great for SEO and for user experience, since users don't wait so long for the page to be displayed.
Ember CLI is a great tool that helps you organize your code, and it scaled well with a growing codebase. Ember also has its own addon ecosystem, and you can choose from a variety of Ember Addons. You can easily grab Bootstrap (in my case) or Foundation and add it to your app.
To avoid serving everything via Express, I chose nginx for serving images and the JavaScript-heavy client. Using an nginx proxy was helpful in my case:
    upstream app_appName.com {
        # replace 0.0.0.0 with your IP address and 1000 with your port of node HTTP server
        server 0.0.0.0:1000;
        keepalive 8;
    }

    server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        client_max_body_size 32M;

        access_log /var/log/nginx/appName.access.log;
        error_log /var/log/nginx/appName.error.log;

        server_name appName.com appName;

        location / {
            # frontend assets path
            root /var/www/html;
            index index.html;

            # to handle Ember routing
            try_files $uri $uri/ /index.html?/$request_uri;
        }

        location /i/ {
            alias /var/i/img/;
        }

        location /api/v1/ {
            proxy_pass http://app_appName.com;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_redirect off;
            proxy_buffering off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
Pro: I love the separation of API & client. Smart people say this is the way to go. Great in theory. Seems cutting-edge and exciting.
I can say it's also great in practice. Another advantage of separating out the REST API is that you can re-use it later for other applications. In a perfect world you should be able to use the same REST API not only for the webpage, but also for mobile applications if you decide to write one.
Con: Not much precedent. Not many examples of this done well. Public examples (twitter.com) feel sluggish & are even switching away from this approach.
Things look different now. There are lots of examples of doing REST API + many clients consuming it.
I decided to go for the architecture of Option #2 for Infiniforms, as it provided a great way to separate the UI from the business logic.
An advantage of this is that the API servers can scale independently of the web servers. If you have multiple clients, the website will not need to scale to the same extent as the API servers, since some clients will be phone/tablet or desktop based.
This approach also gives you a good base for opening up your API to your users, especially if you use your own API to provide all of the functionality for your website.
A very nice question. I'm surprised, as I thought this was a very common task nowadays and that there would be plenty of resources for this problem; however, that turned out not to be true.
My thoughts are as follows:
Create a module that holds the common logic shared between the API controllers and the HTML controllers, without returning JSON or rendering HTML. Include this module in both the HTML controller and the API controller, and then do whatever you want in each. For example:
    module WebAndAPICommon
      module Products
        def index
          @products = [] # do some logic here that sets the @products variable
        end
      end
    end

    class ProductsController < ApplicationController
      # default products controller, for rendering HTML pages
      include WebAndAPICommon::Products

      def index
        super
      end
    end

    module API
      class ProductsController < ApplicationController
        include WebAndAPICommon::Products

        def index
          super
          render json: @products
        end
      end
    end
I've gone for a hybrid approach where we use Sinatra as a base with ActiveRecord/Postgres etc. to serve up page routes (Slim templates) and expose a REST API the web app can use. In early development, things like populating select options are done via helpers rendering into the Slim template, but as we approach production this gets swapped out for an AJAX call to the REST API, as we start to care more about page-load speeds and so forth.
Stuff that's easy to render out in Slim gets handled that way, and stuff like populating forms and receiving form POST data (from jQuery.Validation's submitHandler, etc.) is all, obviously, AJAX.
Testing is an issue. Right now I'm stumped trying to pass JSON data to a Rack::Test POST test.
I personally prefer option (3) as a solution. It's used in just about all the sites a former (household name) employer of mine has. It means you can get front-end devs who know all about JavaScript, browser quirks and whatnot to code up your front end. They only need to know "curl xyz and you'll get some JSON" and off they go.
Meanwhile, your heavyweight back-end guys can code up the JSON providers. These guys don't need to think about presentation at all; instead they worry about flaky backends, timeouts, graceful error handling, database connection pools, threading, scaling, etc.
Option 3 gives you a good, solid three-tier architecture. It means the stuff you spit out of the front end is SEO-friendly, can be made to work with old or new browsers (and those with JS turned off), and could still use JavaScript client-side templating if you want (so you could do things like handle old browsers/Googlebot with static HTML, but send JS-built dynamic experiences to people using the latest Chrome browser, or whatever).
In all the cases I've seen of option 3, it's been a custom implementation of some PHP that isn't especially transferable between projects, let alone out into open source land. I guess more recently PHP may have been replaced with Ruby/Rails, but the same sort of thing is still true.
FWIW, $current_employer could do with option 3 in a couple of important places. I'm looking for a good Ruby framework in which to build something. I'm sure I can glue together a load of gems, but I'd prefer a single product that broadly provides templating, 'curling', optional authentication, and optional memcache/NoSQL-connected caching. There I'm failing to find anything coherent :-(
Building a JSON API in Rails is first class. The JSONAPI::Resources gem does the heavy lifting for a http://jsonapi.org spec'd API.

YSlow alternatives - Optimisations for small websites

I am developing a small intranet based web application. I have YSlow installed and it suggests I do several things but they don't seem relevant for me.
e.g. I do not need a CDN.
My application is slow so I want to reduce the bandwidth of requests.
What rules of YSlow should I adhere to?
Are there alternative tools for smaller sites?
What is the check list I should apply before rolling out my application?
I am using ASP.net.
Bandwidth on intranet sites shouldn't be an issue at all (unless you have VPN users, that is). If you don't, and it's still crawling, it probably has more to do with the backend than the front-facing structure.
If you are trying to optimise for remote users, some of the same things apply to try and optimise the whole thing:
Don't use 30 stylesheets - cat them into one
Don't use 30 JS files, cat them into one
Consider compressing both JS and CSS using minifiers or the YUI compressor.
Consider using sprites (images with multiple versions in - eg button-up and button-down, one above the other)
Obviously, massive images are a no-no
Make sure you send Expires headers so that stylesheets/JS/images etc. are all cached for a sensible amount of time (see the sketch after this list).
Make sure your pages aren't ridiculously large. If you're in a controlled environment and you can guarantee JS availability, you might want to page data with AJAX.
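The question is ASP.NET, but to show the Expires-header advice in runnable form, here is a sketch using Express static serving (paths are hypothetical); the same idea applies to IIS configuration:

    // Serve static assets with long-lived cache headers. Fingerprint the
    // file names (e.g. app-3f9a.css) so that updates bust the cache.
    var express = require('express');
    var app = express();
    app.use('/assets', express.static('public', { maxAge: '365d' }));
    app.listen(3000);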
To begin, limit the number of HTTP requests made for images, scripts and other resources by combining where possible, and consider minifying them too. I would recommend Fiddler for debugging HTTP.
Be mindful of the size of ViewState: set EnableViewState = false where possible. For example, for dropdown list controls that never have their list of items changed, disable ViewState and populate them in Page_Init or an OnLoad override. "TRULY Understanding ViewState" is a must-read article on the subject.
Oli posted an answer while I was writing this, and I have to agree that bandwidth considerations should be secondary or tertiary for an intranet application.
I've discovered Page Speed since asking this question. It's not really for smaller sites, but it is another great Firebug plug-in.
Update: as of June 2015 the Page Speed plugins for Firefox and Chrome are no longer maintained or available; instead, Google suggests the web version.
Pingdom tools provides a quick test for any publicly accessible web page.