Markup Formatting: Server-side or Client-side?

I'm working on a project that might use a markup framework I'll design specifically for it. For a startup service that may need to format large amounts of text, which is the better place to do the markup processing: the client side or the server side?

If you can do it on the server, do it on the server. This keeps the technological requirements on the client low (you usually do not have any control over the client).
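A minimal sketch of what that looks like in C# (the ConvertMarkupToHtml helper is a made-up stand-in for whatever markup framework you end up designing): the client only ever receives finished HTML, so it needs nothing beyond a browser.

```csharp
using System;
using System.Net;

class ServerSideFormatting
{
    // Hypothetical stand-in for the custom markup framework:
    // escape the raw text, then apply a trivial paragraph rule.
    static string ConvertMarkupToHtml(string markup)
    {
        string escaped = WebUtility.HtmlEncode(markup);
        return "<p>" + escaped.Replace("\n\n", "</p><p>") + "</p>";
    }

    static void Main()
    {
        string submitted = "Hello *world*\n\nSecond paragraph";
        // The server renders the markup once; clients only receive HTML,
        // so no formatting logic has to ship to (or run on) the client.
        Console.WriteLine(ConvertMarkupToHtml(submitted));
    }
}
```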

Related

Should a REST API reflect server-side application architecture

I'm in the middle of writing my first web app. Just wondering what the conventions are when it comes to REST API design. Is it better to have it reflect my server-side architecture, or whatever seems easier to reason about?
I'm thinking of either doing:
/serviceProvider/product
or
/product/serviceProvider
My server-side architecture is separated into modules organized by service provider; however, they all expose a product query API.
An API should ideally be designed to make the most sense to its consumers. There isn't really a good reason to reflect your "server architecture" at all. In fact, doing so is what's usually called a leaky abstraction or a leaky API and is considered bad practice, mainly because your application structure may change, and then you have these possible scenarios:
you need to change your API, which is a non-trivial task when it's already being used by someone;
your API stops being reflective of your application structure which leads to inconsistencies;
exposing your application structure or database schema to the world may have security implications.
With these things in mind, you might as well design the API with a focus on ease of use in the first place. The consumer of your API doesn't need to know or care about your application architecture.
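To illustrate, here is a sketch in ASP.NET Web API style (the interface and controller are invented for the example) of a consumer-facing route that stays stable while the per-provider modules behind it are free to change:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

// Hypothetical internal shape: one module per service provider.
public interface IProviderModule
{
    string ProviderName { get; }
    IEnumerable<string> QueryProducts();
}

// Consumer-facing resource: GET /api/products?provider=acme
// The URL names the resource; the per-provider module layout stays hidden.
public class ProductsController : ApiController
{
    // In a real app these would come from your module registry / DI container.
    private static readonly List<IProviderModule> Modules = new List<IProviderModule>();

    public IEnumerable<string> Get(string provider = null)
    {
        IEnumerable<IProviderModule> source = provider == null
            ? Modules
            : Modules.Where(m => m.ProviderName == provider);

        // Fan out over provider modules internally; consumers never see
        // that structure, so it can be reorganized without breaking the API.
        return source.SelectMany(m => m.QueryProducts());
    }
}
```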
I believe that keeping the API aligned with the architecture is important: you're forced to offer a simple API, and that in turn enforces a simplified architecture on the server side.
That said, of course you don't want to expose every server-side method, or even every server-side property of the returned objects.
At Kaltura we also believe in flat (not nested) paths to simplify the API.
For more guidelines, see my blog: http://restafar.com/create-new-rest-server/

Binary Serialization vs. use of WCF

I am wondering if there are any performance overhead issues to consider when using WCF vs. binary serialization done manually. I am building an n-tier site and wish to implement asynchronous behavior across tiers. I plan on passing data in binary form to lessen bandwidth. WCF seems to be a good shortcut to building your own tools, but I am wondering if there are any points to be aware of when choosing between WCF and the System.IO namespace with your own lightweight library.
There is a binary formatter for WCF, though it's not entirely binary; it produces SOAP messages whose content is formatted using the .NET Binary Format for XML, which is a highly compacted form of XML. (Examples of what this looks like can be found in the WCF samples.)
Alternatively, you can implement your own custom message formatter to format messages however you want, as long as the formatter is available on both the client and the server side. (I think you'll still have some overhead from WCF, but not much.)
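For reference, using the binary encoder is just a matter of composing a CustomBinding from the encoding and transport elements; a minimal sketch (the service contract and address are invented for illustration):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

[ServiceContract]
public interface IQuoteService
{
    [OperationContract]
    string GetQuote(int id);
}

class BindingSetup
{
    static Binding CreateBinaryHttpBinding()
    {
        // Compose the .NET Binary Format for XML encoder with HTTP transport.
        // Client and server must use the same binding so each side can
        // decode the other's messages.
        return new CustomBinding(
            new BinaryMessageEncodingBindingElement(),
            new HttpTransportBindingElement());
    }

    static void Main()
    {
        var binding = CreateBinaryHttpBinding();
        var factory = new ChannelFactory<IQuoteService>(
            binding, new EndpointAddress("http://localhost:8080/quotes"));
        IQuoteService client = factory.CreateChannel();
        // Fails at runtime unless a matching service is actually listening.
        Console.WriteLine(client.GetQuote(42));
    }
}
```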
My personal opinion: no amount of overhead savings you might get from defining a custom binary format, and writing all of the serialization/deserialization code to implement it manually, will ever compensate for the time and effort you will spend implementing and debugging such a mechanism.

Web API design tips

I am currently developing a very simple web service and thought I could write an API for it, so that when I decide to expand to new platforms I would only have to code the parser application. That said, the API isn't meant for other developers, just for me, but I won't restrict access to it, so anyone can build on it.
Then I thought I could even run the website itself through this API, for various reasons like lower bandwidth consumption (HTML generated in the browser) and client-side caching. The site being AJAX-heavy seemed like an even bigger reason to do so.
The layout looks like this:
Server (database, programming logic)
|
API (handles user reads/writes)
|
Client application (the website, browser extensions, desktop app, mobile apps)
|
Client cache (further reduces server reads)
After the introduction, here are my questions:
Is this a good use of an API?
Is it a good idea to run the whole website through the API?
What choices for safe authentication do I have when using the API? (For some reason I prefer not to use HTTPS.)
EDIT
Additional questions:
Are there any alternative approaches I haven't considered?
What are some potential issues I haven't accounted for that may arise using this approach?
First things first.
Asking if a design (or in fact anything) is "good" depends on how you define "goodness". Typical criteria are performance, maintainability, scalability, testability, reusability etc. It would help if you could add some of that context.
Having said that...
Is this a good use of an API?
It's usually a good idea to separate out your business logic from your presentation logic and your data persistence logic. Your design does that, and therefore I'd be happy to call it "good". You might look at a formal design pattern to do this - Model View Controller is probably the current default, esp. for web applications.
Is it a good idea to run the whole website through the API?
Well, that depends on the application. It's totally possible to write an application entirely in Javascript/Ajax, but there are browser compatibility issues (esp. for older browsers), and you have to build support for things users commonly expect from web applications, like deep links and search engine friendliness. If you have a well-factored API, you can do some of the page generation on the server, if that makes it easier.
What choices for safe authentication do I have when using the API? (For some reason I prefer not to use HTTPS.)
Tricky one - with this kind of app, you have to distinguish between authenticating the user, and authenticating the application. For the former, OpenID or OAuth are probably the dominant solutions; for the latter, have a look at how Google requires you to sign up to use their Maps API.
In most web applications, HTTPS is not used for authentication (proving the current user is who they say they are), but for encryption. The two are related, but by no means equivalent...
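If you do stay off HTTPS, one common pattern for authenticating API calls (it does not hide their contents, which still travel in the clear) is HMAC request signing with a per-client shared secret; a minimal sketch:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class RequestSigning
{
    // Sign the request with a per-client shared secret. The server repeats
    // the computation and compares; the secret itself never goes on the wire.
    static string Sign(string secret, string method, string path, long unixTime)
    {
        string payload = $"{method}\n{path}\n{unixTime}";
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret)))
        {
            return Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
        }
    }

    static void Main()
    {
        long now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
        // The client sends this signature (plus its key id and the timestamp)
        // in a header; the timestamp lets the server reject replayed requests.
        Console.WriteLine(Sign("shared-secret", "GET", "/api/products", now));
    }
}
```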
Are there any alternative approaches I haven't considered?
Maybe this fits more under question 5 - but in my experience, API design is a rather esoteric skill - it's hard for an API designer to be able to predict exactly what the client of the API is going to need. I would seriously consider writing the application without an API for your first client platform, and factor out the API later - that way, you build only what you need in the first release.
What are some potential issues I haven't accounted for that may arise using this approach?
Versioning is a big deal with APIs - once you've created an interface, you can almost never change it, especially with multiple clients that you don't control. I'd build versioning in as a first-class concept - with RESTful APIs, you can do this as part of the URL.
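For example, in ASP.NET Web API you can bake the version into the route template from day one (the route name and template here are illustrative):

```csharp
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Version is a first-class part of the URL: /api/v1/products/5.
        // A breaking change later ships as /api/v2/... while v1 keeps working.
        config.Routes.MapHttpRoute(
            name: "VersionedApi",
            routeTemplate: "api/v{version}/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}
```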
Is this a good use of an API?
Depends on what you will do with that application.
Is it a good idea to run the whole website through the API?
No - that way your site will be accessible only through your application, and this implementation prevents compatibility with other browsers.
What choices for safe authentication do I have when using the API? (For some reason I prefer not to use HTTPS.)
You can use OmniAuth.
Are there any alternative approaches I haven't considered?
Create both frontends: one in your application, and another for common browsers.
What are some potential issues I haven't accounted for that may arise using this approach?
I don't know your idea in detail, but I can't see any major danger.

Is it advisable to use the canonical form in a Silverlight application?

We are developing a LOB application using Silverlight, and several team members are advocating the use of the canonical design pattern instead of creating simple WCF services. As the lead, I’m trying to balance best practices with an incredibly tight timeline.
Here are the reasons I do NOT think Canonical is a good approach for our project.
We have no immediate (<5 years) requirement to expose any internal services to the enterprise.
Time required for governance. (Developing adapters with data transformation logic, developing XSDs, and developing contracts [fault, data, and operation]).
No need to expose different data contracts than what exists in the data layer.
It doesn’t appear that we can easily use ‘self-tracking entities’ with the Canonical approach.
Here are some reasons I’m considering using the Canonical approach.
We can use the XSD schemas for data type and length validation.
We will be prepared to allow consumption of our services to the enterprise, whether it’s 5 years or 1 year.
We can feel good that we’re implementing best practices. :)
So, is it advisable to follow the Canonical approach with a Silverlight application? It does not seem that the benefits the Canonical approach provides outweigh the additional work. …or perhaps I’m wrong and it’s not additional work.
I think you should definitely go with WCF RIA Services. It's extensible at every possible point, fast to develop with, accessible as regular WCF services, offers plenty of different endpoint types, and is generally very mature. It implements best practices, and the validation process is fully customizable. It really is a no-brainer. If you have additional questions about it, shoot away - I'll gladly answer them. :)

Does WCF ease implementing a LaTeX-to-PDF conversion web service?

I want to buy a book on WCF because I need to develop a LaTeX-to-PDF conversion web service.
The idea is to let the customer submit LaTeX documents (input files with a .tex extension) to my web site and download the PDF output generated by the server, helped by a service behind the scenes.
I am new to WCF, so I have no idea whether WCF can ease my development.
I am also considering security issues, such as protecting the server from bad input.
Could you give me a suggestion whether or not WCF will suit my need?
EDIT 1: I am torn about choosing only one answer, as both Martin's and Ben's really helped me. Using the classic probabilistic approach of flipping a coin, the result is Ben's. I am sorry.
WCF will neither help nor hinder your effort overmuch - it's designed to facilitate the actual plumbing of service routing and message handling. The "business logic" (in this case, the LaTeX-to-PDF conversion) is left to the programmer to implement. For a task with such a simple workflow, WCF would definitely be overkill.
If you had complex authentication requirements, WCF's security features would help you immensely, after a very steep learning curve. I'd recommend sticking with a simple POST or something. Your question about bad input is, again, outside of the scope of WCF. You'll have to take care of that in your business logic.
That said, good luck - sounds like a fun project!
I don't think WCF will help much. For the service itself, I recommend using a form upload (i.e. a regular HTML page with a regular form producing a regular POST); this can be done with any web framework, including ASP.NET.
For protection against bad input, you'll have to define "bad input": what kind of threat could a LaTeX file pose? But regardless of the threat - WCF is not designed to help protect your service from bad input; its security features are rather designed to prevent unauthorized users from accessing your service (whether or not they then submit bad input).
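To make the "regular form POST" concrete, here is a minimal sketch of an ASP.NET handler that accepts a .tex upload and shells out to pdflatex (the field name, paths, and status codes are assumptions; a real deployment would also sandbox the compiler):

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Web;

// Minimal sketch of the form-upload approach: receive a .tex file via a
// regular POST, run pdflatex on it, and send back the resulting PDF.
public class ConvertHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        HttpPostedFile upload = context.Request.Files["texfile"];
        if (upload == null ||
            !Path.GetExtension(upload.FileName).Equals(".tex", StringComparison.OrdinalIgnoreCase))
        {
            context.Response.StatusCode = 400; // part of "defining bad input"
            return;
        }

        // Work in an isolated temp directory so uploads can't clobber each other.
        string workDir = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString("N"));
        Directory.CreateDirectory(workDir);
        string texPath = Path.Combine(workDir, "input.tex");
        upload.SaveAs(texPath);

        // -no-shell-escape keeps the document from running external commands.
        var psi = new ProcessStartInfo("pdflatex",
            "-no-shell-escape -interaction=batchmode input.tex")
        {
            WorkingDirectory = workDir,
            UseShellExecute = false
        };
        using (Process p = Process.Start(psi)) { p.WaitForExit(); }

        string pdfPath = Path.Combine(workDir, "input.pdf");
        if (!File.Exists(pdfPath)) { context.Response.StatusCode = 422; return; }

        context.Response.ContentType = "application/pdf";
        context.Response.WriteFile(pdfPath);
    }
}
```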