Understanding the "Backend" [closed] - api

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
What exactly is backend code? Is it just APIs? Is there business logic in the backend?
For instance, if I have a timesheet app and the user enters their time for the day, is the logic to add that information to the database all in the frontend? And the backend is just for exposing an API to add/update information to the database? For this simple timesheet app, generally speaking, what would be in the frontend vs the backend?

This is a very broad question/topic, but I'll try to explain my view on those terms and how to decide where to put what:
In general, the "front-end" is what the user will see. To make this possible, that code has to be "stored and executed" on the users device. I'm quite certain that this means (experienced) users will be able to look into the code of the frontend and might be able to understand how it works. (I'll later explain more about why that is relevant.)
The "back-end" on the other side is usually provided as an "as is" service that somebody is offering, which is reachable via one or multiple of the many available communication protocols and its interface is usually documented in some way, even if it's not public. The most prominent examples nowadays are REST APIs and Graphql APIs.
As you already mention, centralized state management (e.g. storing data) can be an essential part of that, but it doesn't have to be.
When "making a call" to the back-end, some code gets executed on some server and the response (if any) is the only new information the frontend or user get to know.
What goes where?
There are many aspects to consider when deciding which part of the code goes where. There is no silver bullet: I'm sure examples of all possible combinations can be found.
There can be many different front-ends for a single back-end, based on user preference (e.g. browser based, mobile apps, command line interface, ...).
They can have different release cycles and update mechanisms, so changes to the back-end might need to stay backwards compatible.
For security, operational or data-consistency reasons, you might need to handle wrong/invalid input in the back-end, especially if the communication protocol (or its kind) changes. Offering a front-end also means it's possible to know how the back-end is called, so it's also possible to call it differently, be it on purpose or by accident; a small sketch of back-end validation follows this list.
Since operations between front-end and back-end are most likely async, certain error handling (like connection issues) can only be handled on front-end side.
Authentication and authorization / secrets management: if your back-end is only an API to the database, it needs to know the right credentials, so they need to be delivered to the user in some way and can be inspected (and potentially "mis"used).
The same goes for business logic: there might be intellectual property or strategic reasons for not delivering it into the hands of users or potential competing companies.
And of course you need to consider how many resources you have available to implement a solution that suits your needs.
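As referenced in the list above, here is a hedged sketch of what back-end validation for the timesheet example could look like (the endpoint shape, field names, and the in-memory "database" are all assumptions, not from the question). The front-end only collects the input and sends it; the back-end re-validates everything, because anyone who can read the front-end code can also call the API directly with other payloads.

# Hypothetical back-end handler for a timesheet API (names are made up).
# The front-end would POST JSON like:
#   {"user_id": 42, "date": "2024-05-01", "hours": 7.5}
from datetime import date

def add_timesheet_entry(db, payload: dict) -> dict:
    """Validate a timesheet entry and store it; return the stored row."""
    try:
        user_id = int(payload["user_id"])
        entry_date = date.fromisoformat(payload["date"])
        hours = float(payload["hours"])
    except (KeyError, ValueError) as exc:
        raise ValueError(f"invalid payload: {exc}")

    if not 0 < hours <= 24:
        raise ValueError("hours must be between 0 and 24")

    row = {"user_id": user_id, "date": entry_date.isoformat(), "hours": hours}
    db.append(row)  # stand-in for an INSERT into the real database
    return row

# Minimal usage with an in-memory "database":
if __name__ == "__main__":
    db = []
    print(add_timesheet_entry(db, {"user_id": 42, "date": "2024-05-01", "hours": 7.5}))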

Related

DB structure/architecture with an auth SaaS

My team and I are considering using an authentication SaaS.
I am quite sure that the SaaS solution will end up being more secure than a hand-made one (even one using proper libs), and in our case we can afford the money.
But the thing that makes me hesitate the most is how this service will interact with the rest of my app. Will it actually simplify our code, or make it an even more complicated knot in the end?
I understand that the user list, with credentials and any additional attributes, is stored there.
But then, what should I store in my app's (SQL) DB?
Say I have users that belong to companies, and other assets in a 1-n relationship.
I like to write things like:
if current_user.company.assets includes current_user.assets do
  // logic here
end
Should I:
only store userIds in these tables?
=> But then I can't use relationships between user attributes and the rest of the DB attributes
store some kind of cached data in a so-called sessions table so I can use it as a disposable table of active users?
=> It feels less secure and implies duplicated content which kind of sucks.
make the user object virtual, loaded with the auth SaaS data, and use it there?
=> Slightly better, but I can't make queries over users; even company.users is not available
some other proposal?
I'm quite confused about the benefits of externalizing what's usually the core object of an app. Am I thinking too monolithically? :-D
Can anyone make suggestions? Even better would be feedback from devs who implemented it.
I found articles on the web; they talk about security and ease of implementation, but they don't tackle this question properly.
Cheers, I hope the question doesn't get closed as I'm very curious about the answer.
I'm quite confused about the benefits of externalizing what's usually the core object of an app. Am I thinking too monolithically? :-D
Yes, you are thinking too monolithically by assuming that you have to write and control all the code. You have asked a question about essentially outsourcing Authentication to an existing SaaS-based solution, when you could just as easily write your own. The assumption that you must own that code is a common mistake many developers make, especially in the area of security for applications.
Authentication is a core requirement for many solutions, but it is very rarely a core aspect or feature of the solution.
By writing your own solution to what is a generally standard concept (Authentication), you have to write, test and maintain your logic, including keeping up to date with the latest security trends over the lifetime of the product. In terms of direct Profit/Cost:
Costs you a lot of time and effort to get it right
Your own solution will add a layer of technical debt: future developers (internal or external) will need to familiarise themselves with your implementation before they can even start maintenance or improvement work
You are directly assuming all the risks and responsibilities to maintain the security of the solution and its data.
Depending on the type of data and the jurisdiction of your application, you may be asked down the track to implement multi-factor authentication or to force all users to re-register to adopt stronger security protocols. That can be a lot of effort in your own solution, or a simple tick of a box in the configuration of your Authentication provider.
Business / Data Schema
You need to be careful to separate the two concepts of Authentication and a User in the business domain. Regardless of where or what methodology you use to Authenticate your users, from a data integrity point of view it is important that there is a User concept in the database to associate related data for each user.
So there is no way around it: your business domain logic requires a table to represent a User in this business domain.
This User table should have an arbitrary Primary Key that is specific to the application domain, and that table should store the token that is used to map the business user to the Authentication process. Then, throughout your model, you can create FK references back to the user table.
In this way it may be possible for you to map users to multiple different providers, or to easily change the provider with minimal or zero impact on the rest of the business domain model.
What is important from a business process point of view is that the application can resolve the correct business User from the token or claims provided in the response from the authentication provider.
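As a hedged sketch of that separation (all table and column names here are invented, not taken from the question), the application keeps its own users table whose primary key the rest of the schema references, alongside the subject/token issued by the auth provider:

# Sketch only: the app owns a "users" table holding the auth provider's
# subject/token; every other table references the app's own primary key,
# never the provider's identifier. Names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE companies (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE users (
    id           INTEGER PRIMARY KEY,              -- app-specific PK, FK target
    auth_subject TEXT UNIQUE NOT NULL,              -- subject/token from the auth SaaS
    company_id   INTEGER REFERENCES companies(id)
);
CREATE TABLE assets (
    id       INTEGER PRIMARY KEY,
    owner_id INTEGER NOT NULL REFERENCES users(id)  -- business FK, not the SaaS id
);
""")

# Resolving the business user from the claims in the provider's response:
def user_id_for_subject(subject):
    row = conn.execute("SELECT id FROM users WHERE auth_subject = ?",
                       (subject,)).fetchone()
    return row[0] if row else None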
Authentication
If SSO (Single Sign-On) is appealing to you, then the choice of which Authentication provider to use can become an issue, depending on the nature of your solution and the type of users who will be Authenticating. If the solution is tenanted to other businesses and offers B2B- or B2C-focused activities, then an Enterprise authentication solution like Azure AD or Google Cloud Identity might make sense. You would register your product in the client's authentication domain so that they can manage their users and access levels.
If the solution is more public-focused, then you might consider other social media Authentication providers as a means of simplifying Authentication for users, rather than forcing them to use your own bespoke Authentication process, for which they will invariably forget their password...
You haven't mentioned what language or runtime you are considering; however, if you do choose to write your own Authentication service, as a bare minimum you should implement OAuth 2.0 to ensure that your solution adheres to standard practices and is compatible with other providers should you choose to use them later.
In a .NET based environment I would suggest Identity Server 4 as a base level of security; there are a lot of resources on implementing it, and other frameworks should have similar projects or providers that you can host yourself. The point is that by using a standard implementation for your own Authentication Service, rather than writing one of your own that is integrated into your software, you are not re-inventing anything; there is a lot of commercial and community support available to help you minimise the effort and cost of getting things up and running.
Conclusion
Ultimately, if you are concerned with Profit, and let's face it most of us are, then the idea that you would re-create the wheel just because you can adds a direct implementation and long-term maintenance Cost, and so will directly reduce Profitability, especially when the effort to integrate existing Authentication providers into your solution is really low.
Even if you choose today to implement your own Authentication Service, it would be wise to implement it in such a way that you can easily offload that workload to an external provider later. That is the natural evolution of security for small to mid-sized applications, once users start to demand more stringent security requirements or additional features than it is cost-effective to provide in your native runtime.
Once security is implemented in an application, the rest of the business process generally evolves and we neglect to come back and review authentication until after a breach, if we or the client ever detect such an event. For this reason it is important that we get security as right as we can from the very start of a solution.
Whilst not directly related, this discussion reminds me of my favourite quote from Eric Lippert in a comment on an SO blog post:
Eric Lippert on What senior developers can learn from beginners
...The notion that programming can be principled — that we proceed by understanding the abstractions afforded by the language, and then match those abstractions to a model of the business domain of the program — is apparently never taught to a great many programmers. Rather, many programmers proceed as though they’re exploring an undiscovered country, and going down paths more or less at random and hoping they end up somewhere good, no matter how twisted the path is that gets them there...
One of the reasons that we use external Authentication Providers is that the plethora of developers who have come before us have already learnt the hard lessons on what to do, or not to do, and have evolved a set of standards and protocols to provide best-practice guidelines on how to protect our users and their data when they are using our software. Many of these external providers represent best-practice implementations, and they maintain them for us as the standards continue to evolve, so that we don't have to.

What's the best way to monitor your REST API? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I've created an API based on the RESTful pattern and I was wondering what's the best way to monitor it. Can I somehow gather statistics on each request, and how deeply could I monitor the requests?
Also, could it be done by using open source software (maybe building my own monitoring service) or do I need to buy third party software?
If it could be achieved by using open source software where do I start?
Start with identifying the core needs that you think monitoring will solve. Try to answer the two questions "What do I want to know?" and "How do I want to act on that information?".
Examples of "What do I want to know?"
Performance over time
Largest API users
Most commonly used API features
Error occurrence in the API
Examples of "How do I want to act on that information?"
Review a dashboard of known measurements
Be alerted when something changes beyond expected bounds
Trace execution that led to that state
Review measurements for the entire lifetime of the system
If you can answer those questions, you can either find the right third-party solution that captures the metrics you're interested in, or inject monitoring probes into the right sections of your API that will tell you what you need to know. I noticed that you're primarily a Laravel user, so many of the metrics you want can likely be captured by adding before (Registering Before Filters On a Controller) and after (Registering an After Application Filter) filters to your application, to measure time to respond and successful completion of the response. This is where the answers to the first set of questions are most important ("What do I want to know?"), as they will guide where and what you measure in your app.
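Since the Laravel specifics live in the linked docs, here is a framework-agnostic sketch of the same before/after idea: a hypothetical WSGI-style middleware in Python that records path, status, and elapsed time for every request (the metric sink and field names are assumptions; wire record_metric to whatever backend you actually use).

# Hypothetical "before/after filter" analogue: a WSGI middleware that
# times every request and hands the result to a metric sink.
import time

def record_metric(path, status, elapsed_ms):
    # stand-in for a log line, StatsD/Graphite call, etc.
    print(f"path={path} status={status} elapsed_ms={elapsed_ms:.1f}")

class TimingMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        started = time.perf_counter()
        status_holder = {}

        def capturing_start_response(status, headers, exc_info=None):
            status_holder["status"] = status.split(" ", 1)[0]
            return start_response(status, headers, exc_info)

        try:
            return self.app(environ, capturing_start_response)
        finally:
            elapsed_ms = (time.perf_counter() - started) * 1000
            record_metric(environ.get("PATH_INFO", "/"),
                          status_holder.get("status", "?"),
                          elapsed_ms)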
Once you know where you can capture the data, selecting the right tool becomes a matter of choosing between (roughly) two classes of monitoring applications: highly specialized monitoring apps that are tightly bound to the operation of your application, and generalized monitoring software that is more akin to a time series database.
There are no popular (to my knowledge) examples of the highly specialized case that are open source. Many commercial solutions do exist however: NewRelic, Ruxit, DynaTrace, etc. etc. etc. Their function could easily be described to be similar to a remote profiler, with many other functions besides. (Also, don't forget that a more traditional profiler may be useful for collecting some of the information you need - while it definitely will not supplant monitoring your application, there's a lot of valuable information that can be gleaned from profiling even before you go to production.)
On the general side of things, there are many more open source options that I'm personally aware of. The longest-lived is Graphite (a great intro to which may be read here: Measure Anything, Measure Everything), which is in pretty common use. Graphite is far from the only option, however, and you can find many others, like Kibana and InfluxDB, should you wish to host them yourself.
Many of these open source options also have hosted options available from several providers. Additionally, you'll find that there are many entirely commercial options available in this camp (I'm founder of one, in fact :) - Instrumental ).
Most of these commercial options exist because application owners have found it pretty onerous to run their own monitoring infrastructure on top of running their actual application; maintaining availability of yet another distributed system is not high on many ops personnel's wishlists. :)
(I'm clearly biased in answering this since I co-founded Runscope, which I believe is the leader in API monitoring, so you can take this all with a grain of salt or trust my years of experience working with thousands of customers specifically on this problem :)
I don't know of any OSS tools specific to REST(ful) API monitoring. General purpose OSS metrics monitoring tools (like Graphite) can definitely help keep tabs on pieces of your API stack, but don't have any API-specific features.
Commercial metrics monitoring tools (like Datadog) or Application Performance Monitoring (APM) tools like (New Relic or AppDynamics) have a few more features specific to API use cases, but none are centered on it. These are a useful part of what we call a "layered monitoring approach": start with high-level API monitoring, and use these other tools (exception trackers, APM, raw logs) to dive into issues when they arise.
So, what API-specific features should you be looking for in an API monitoring tool? We categorize them based on the three factors that you're generally monitoring for: uptime/availability, performance/speed and correctness/data validation.
Uptime Monitoring
At a base level you'll want to know if your APIs are even available to the clients that need to reach them. For "public" (meaning available on the public internet, not necessarily publicized... a mobile backend API is public but not necessarily publicized) APIs, you'll want to simulate the clients that are calling them as closely as possible. If you have a mobile app, it's likely the API needs to be available around the world. So at a bare minimum, your API monitoring tool should allow you to run tests from multiple locations. If your API can't be reached from a location, you'll want notifications via email, Slack, etc.
If your API is on a private network (corporate firewall, staging environment, local machine, etc.) you'll want to be able to "see" it as well. There are a variety of approaches for this (agents, VPNs, etc.) just make sure you use one your IT department signs off on.
Global distribution of testing agents is an expensive setup if you're self-hosting, building in-house or using an OSS tool. You need to make sure each remote location you set up (preferably outside your main cluster) is highly-available and fully-monitored as well. This can get expensive and time-consuming very quickly.
Performance Monitoring
Once you've verified your APIs are accessible, you'll want to start measuring how fast they are performing to make sure they're not slowing down the apps that consume them. Raw response time is the bare-minimum metric you should be tracking, but it's not always the most useful. Consider cases where multiple API calls are aggregated into a view for the user, or where actions by the user generate dynamic or rarely requested data that may not be present in a caching layer yet. These multi-step tasks or workflows can be difficult to monitor with APM or metrics-based tools, as they don't have the capability to understand the content of the API calls, only their existence.
Externally monitoring for speed is also important to get the most accurate representation of performance. If the monitoring agent sits inside your code or on the same server, it's unlikely to take into account all the factors that an actual client experiences when making a call: things like DNS resolution, SSL negotiation, load balancing, caching, etc.
Correctness and Data Validation
What good is an API that's up and fast if it's returning the wrong data? This scenario is very common and is ultimately a far worse user experience. People understand "down"... they don't understand why an app is showing them the wrong data. A good API monitoring tool will allow you to do deep inspection of the message payloads going back and forth. JSON and XML parsing, complex assertions, schema validation, data extraction, dynamic variables, multi-step monitors and more are required to fully validate that the data being sent back and forth is correct.
It's also important to validate how clients authenticate with your API. Good API-specific monitoring tools will understand OAuth, mutual authentication with client certificates, token authentication, etc.
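As a rough illustration of the three factors (not any particular product's feature set), a hand-rolled probe might look like the sketch below, with the URL, latency budget, and expected payload fields all assumed; a real tool would run this from several locations, on a schedule, and alert on failure.

# Minimal uptime / performance / correctness probe for one endpoint.
# The URL, timeout, latency budget, and expected fields are assumptions.
import json
import time
import urllib.request

URL = "https://api.example.com/v1/status"   # hypothetical endpoint
LATENCY_BUDGET_MS = 500

def check():
    started = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:      # uptime
        body = resp.read()
        status = resp.status
    elapsed_ms = (time.perf_counter() - started) * 1000         # performance

    assert status == 200, f"unexpected status {status}"
    assert elapsed_ms < LATENCY_BUDGET_MS, f"too slow: {elapsed_ms:.0f} ms"

    payload = json.loads(body)                                   # correctness
    assert payload.get("status") == "ok", f"bad payload: {payload}"
    print(f"OK in {elapsed_ms:.0f} ms")

if __name__ == "__main__":
    check()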
Hopefully this gives you a sense of why API monitoring is different from "traditional" metrics, APM and logging tools, and how they can all play together to get a complete picture of how your application is performing.
I am using runscope.com for my company. If you want something free, apicombo.com can also do it.
Basically you can create a test for your API endpoint to validate the payload, response time, status code, etc. Then you can schedule the test to run. They also provide some basic statistics.
I've tried several applications and methods to do that, and the best (for my company and our related projects) is to log key=value pairs (atomic entries with all the information associated with the operation, like source IP, operation result, elapsed time, etc., in specific log files for each node/server) and then monitor them with Splunk. With your REST and JSON data your approach may be different, but it's also well supported.
It's pretty easy to install and setup. You can monitor (almost) real time data (responses times, operation results), send notifications on events and do some DWH (and many other things, there are lots of plugins).
It's not open source, but you can try it for free if you use less than 50 MB of logs per day (that's how it worked some time ago; since I'm now on an enterprise license I'm not 100% sure).
Here is a little tutorial explaining how to achieve what you are looking for: http://blogs.splunk.com/2013/06/18/getting-data-from-your-rest-apis-into-splunk/
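A minimal sketch of the key=value logging described above might look like this (the field names and file name are made up); the point is one atomic, easily indexed line per operation that Splunk or a similar tool can ingest directly.

# Sketch: one key=value line per operation, written to a per-node log file.
import time

def log_operation(logfile, source_ip, operation, result, elapsed_ms):
    line = (f"ts={time.time():.3f} ip={source_ip} op={operation} "
            f"result={result} elapsed_ms={elapsed_ms:.1f}\n")
    logfile.write(line)
    logfile.flush()

with open("api-node1.log", "a") as f:
    log_operation(f, "203.0.113.7", "getScore", "ok", 42.0)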

How to guarantee the continued functionality of my website with external APIs e.g Facebook Graph [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
With web APIs like Facebook constantly being updated and changed, any given change may take me a while to implement in terms of working out what has changed, and then updating it/correcting it.
If during this time a user cannot logon for example, they are not going to put their trust in my website.
So, how do you manage your dependencies on other services to ensure that if it works now it will work forever?
Well, frankly what you are asking for is that Facebook freeze their entire API and Graph implementation. This simply will not happen. In an ever changing technological world things are going to change. We as developers for a 3rd party platform are responsible for keeping up to date with any changes that 3rd party platform makes.
If you want to keep track of all the changes that Facebook is making to its APIs, then you should take a look at the Developers Roadmap. They list there all the changes that are being planned.
For serious changes, those that would essentially "break" current functionality, Facebook guarantees at least 90 days' notice before the changes are made.
Taken from the Facebook developers roadmap -
In the spirit of openness and transparency and to adhere to our Breaking Change Policy, we publish this roadmap to help developers plan for changes that may require code modifications. Like all roadmaps, it may shift slightly, but we will share insight into what is happening as details become available. We encourage developers to subscribe to our blog, where we announce rollout plans and timing.
Given all of this information, there are still things you can do to make sure your users always have access to your site. One thing would be to provide an alternative login method. It's really useful to be able to hitch a ride on Facebook authentication and seamlessly integrate your site's login with theirs, but what happens if one day Facebook (for some reason) goes down? That would mean that your users would be locked out of your site too! Consider offering an alternative if it is applicable to your situation.
Communication is key.
Most respectable web API providers (like Facebook Graph) have a developers blog, as well as mailing lists that provide feedback regarding upcoming changes to the API. Chances are, they don't want to break your app as much as you don't want it to break. So read the blogs, and/or subscribe to the mailing lists. And take note about upcoming "breaking changes" (as they're called.)
Additionally, it is up to you to heed their warnings and suggestions. If they say that a particular API call is deprecated, then there is a good chance that it won't be there after the next update. If a call you're making is deprecated, make early strides to find an alternative.

A Software Design Issue: The Router Class [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
In subsystem design, I sometimes see software designs that have one high-level class with only one feature: it routes a call from a client using the class to a certain class the client would like to use. However, it has no functionality of its own. Take this scenario:
Say there are five classes in the bowling alley subsystem: An alley, a lane, a bowler, control desk, and a score. Anytime a client outside the subsystem wants any data to display to a user, it would communicate only to the control desk (the router) that would call any of the classes it holds to get the client's requested data (a score for example: Client calls control desk with getScore(), which calls a Lane's getScore(), which calls a Bowler's getScore()).
I understand this is a bad design decision, but I'd like to hear real-world examples, with consequences you discovered, of having this router class (also known as a "middleman"). What issues did you run into as the system you were working on evolved? What arguments would you make to persuade software designers to avoid router classes?
I'd argue that in some designs a router is the preferred design pattern, such as in MVC frameworks to delegate handlers for URLs. In that situation it's really helpful because it provides a very clean separation between what the client "sees" and the actual logic behind it.
Anytime a client outside the subsystem wants any data to display to a user, it would communicate only to the control desk (the router) that would call any of the classes it holds to get the client's requested data
This sounds like the Facade pattern.
As for the middleman, in the following example, wouldn't the Lane be the culprit?
a score for example: Client calls control desk with getScore(), which calls a Lane's getScore(), which calls a Bowler's getScore())
Simplifying the interface to a subsystem for the benefit of clients outside the subsystem could be considered good design.
The Facade pattern, and the Mediator pattern perform similar tasks to what you are describing. Your use of the Middleman moniker implies the Mediator pattern over the Facade pattern, as a Middleman is responsible for negotiating between two entities with neither entity needing to know the specifics of how to communicate with the other.
You can use either of these patterns to reduce coupling for the client class, which needs to use the system the Mediator or Facade is masking. In the case of the Facade pattern, the intention is to provide a convenient way to interface a system of classes. For the Mediator pattern, the purpose is to abstract the steps required to perform a complex task from the client.
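For illustration, here is a minimal sketch of the control desk acting as a Facade over the bowling subsystem (the class and method names follow the question's example; everything else is invented).

# Facade sketch: the client only talks to ControlDesk; the delegation
# chain (ControlDesk -> Lane -> Bowler) stays hidden inside the subsystem.
class Bowler:
    def __init__(self, name, score=0):
        self.name, self.score = name, score

    def get_score(self):
        return self.score

class Lane:
    def __init__(self, bowlers):
        self.bowlers = bowlers

    def get_score(self, bowler_name):
        for b in self.bowlers:
            if b.name == bowler_name:
                return b.get_score()
        raise KeyError(bowler_name)

class ControlDesk:
    """The facade: the only class clients outside the subsystem use."""
    def __init__(self, lanes):
        self.lanes = lanes

    def get_score(self, lane_no, bowler_name):
        return self.lanes[lane_no].get_score(bowler_name)

desk = ControlDesk({1: Lane([Bowler("Ada", 187)])})
print(desk.get_score(1, "Ada"))   # the client never touches Lane or Bowler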
I don't know that routing method calls is always such a bad idea.
It seems that you'd just have the problem associated with any additional layer of abstraction - that the abstraction can break, or that it's one more thing that can potentially misbehave if there's a change made to something underlying.
I've never seen anything that called more than a few layers deep, but I just imagine that adding extra calls would make it more difficult to trace the path information takes, and make troubleshooting more difficult.
One potential problem, though, is if each layer implements its own error handling or retry process, making something that's insignificant at each level overwhelming as a whole. For example, if the Lane makes two attempts to check the bowler's score, and the desk makes 3 attempts to check the score, then a failure of the bowler to return a score will result in 6 queries being made. Add a 30 second timeout at the bowler, and you're suddenly waiting for 3 minutes for what should take 30 seconds.
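A small sketch of that amplification, using the retry counts from the example but replacing the 30-second timeout with an immediate failure so it runs instantly:

# Retry amplification sketch: 3 desk attempts x 2 lane attempts = 6 bowler
# calls, so per-call timeouts multiply. Numbers here are illustrative.
calls = 0

def bowler_get_score():
    global calls
    calls += 1
    raise TimeoutError("bowler did not respond (30 s each in the real scenario)")

def lane_get_score(retries=2):
    for _ in range(retries):
        try:
            return bowler_get_score()
        except TimeoutError:
            pass
    raise TimeoutError("lane gave up")

def desk_get_score(retries=3):
    for _ in range(retries):
        try:
            return lane_get_score()
        except TimeoutError:
            pass
    raise TimeoutError("desk gave up")

try:
    desk_get_score()
except TimeoutError:
    print(f"bowler was queried {calls} times")   # prints 6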
OldNewThing had an article about an example of this in the Windows OS, and the problems it caused, but now I can't seem to find it.
I think that both the ASP.NET MVC and MVP patterns utilize this type of concept, where you end up with something simply handling the logic that is executed from one end to the other, or on behalf of a lower layer to a higher layer. This certainly makes testing easier to perform, so that in and of itself is a MAJOR benefit. This type of pattern does create some manual or tedious work, in that you could click a button and have it do a task directly, rather than click the button, have something intercept that click, and then call into some service-managing class that does the work. But on the front of keeping your code clean and readable, the more separation there is, often times the better.
If you are not a tester, or couldn't care less about patterns directly, then think of it in another way. You have a link that takes a user to a page. This link is scattered all over your site because the destination is very important or used a lot. The destination changes. This could be a find-and-replace operation... or you could insert a RedirectService (call it what you will) that, when someone clicks a link, takes charge and directs the clicker to the right location. This allows the location to be defined once, in one place, and therefore changed once. Find and replace often changes things that weren't meant to be changed!
No matter how you look at this, separation of concerns is a good thing. The UI is one concern. The controller of activities is another concern. The activity itself is yet another concern!

What are the best resources if you wanted to create an application with modularization? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 4 years ago.
In my analysis of the newer web platforms/applications, such as Drupal, Wordpress, and Salesforce, many of them create their software based on the concept of modularization: Where developers can create new extensions and applications without needing to change code in the "core" system maintained by the lead developers. In particular, I know Drupal uses a "hook" system, but I don't know much about the engine or design that implements it.
If you were to go down the path of creating an application and you wanted a system that allowed for modularization, where would you start? Is this a particular design pattern that everyone knows about? Is there a handbook that this paradigm tends to subscribe to? Are there any websites that discuss this type of development from the ground up?
I know some people point directly to OOP, but that doesn't seem to be the same thing, entirely.
This particular system I'm planning leans more towards something like Salesforce, but it is not a CRM system.
For the sake of the question, please ignore the Buy vs. Build argument, as that consideration is already in the works. Right now, I'm researching the build aspect.
There are two ways to go about this; which one to take depends on how your software will behave.
One way is the plugin route, where people can install new code into the application, modifying the relevant aspects. This route demands that your application is installable and not only offered as a service (or else that you install and review code sent in by third parties, a nightmare).
The other way is to offer an API, which can be called by the relevant parties and either make the application transfer control to code located elsewhere (a la Facebook apps) or make the application do what the API commands allow the developer to request (a la Google Maps).
Even though the mechanisms vary and how to actually implement them differ, you have to, in any case, define
What freedom will I let the users have?
What services will I offer for programmers to customize the application?
and the most important thing:
How to enable this in my code while remaining secure and robust. This is usually done by sandboxing the code, validating inputs and potentially offering limited capabilities to the users.
In this context, hooks are predefined places in the code that call all the registered plugins' hook functions, if defined, modifying the standard behavior of the application. For example, if you have a function that renders a background, you can have:
function renderBackground() {
    foreach (Plugin p in getRegisteredPlugins()) {
        if (p.rendersBackground) p.renderBackground();
    }
    // Standard background code if nothing got executed
    // (or it still runs, according to needs)
}
In this case you have the 'renderBackground' hook that plugins can implement to change the background.
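The same hook mechanism in runnable form might look roughly like the sketch below (the registry and hook names are invented, not from any particular framework):

# Minimal hook registry sketch: plugins register a function for a named
# hook, and the core calls every registered function at that point.
_hooks = {}

def register_hook(name, func):
    _hooks.setdefault(name, []).append(func)

def run_hooks(name, *args):
    ran = False
    for func in _hooks.get(name, []):
        func(*args)
        ran = True
    return ran

def render_background():
    if not run_hooks("render_background"):
        print("standard background")   # default behaviour if no plugin hooked in

# A "plugin" modifying the standard behaviour:
register_hook("render_background", lambda: print("plugin-drawn background"))
render_background()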
In the API way, the user application would call your service to get the background rendered:
// other code
Background b = Salesforce2.AjaxRequest('getBackground', RGB(255,10,0));
// the app now has the result of calling you
This is all also related to the Hollywood principle, which is a good thing to apply, but sometimes it's just not practical.
The Plugin pattern from P of EAA is probably what you are after. Create a public interface for your service to which plugins (modules) can bind ad hoc at runtime.
This is called a component architecture. It's really quite a big area, but some of the important things here are:
composition of components (container components can contain any other component)
for example a grid should be able to contain other grids, or any other components
programming by interface (components are interacted with through known interfaces)
for example a view system that might ask a component to render itself (say in HTML), or the component might be passed a render area and asked to draw into it directly
extensive use of dynamic registries (when a plugin is loaded, it registers itself with the appropriate registries); a small sketch of this follows the list
a system for passing events to components (such as mouse clicks, cursor enter etc)
a notification system
user management
and much much more!
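As mentioned in the list above, here is a very small sketch of two of those points together, programming by interface plus a dynamic registry (all names are invented):

# Sketch: components implement a known interface and register themselves,
# so the core can render whatever happens to be installed.
from abc import ABC, abstractmethod

class Component(ABC):
    @abstractmethod
    def render(self) -> str:
        """Return this component's HTML."""

_registry: list[Component] = []

def register(component: Component) -> None:
    _registry.append(component)

class Grid(Component):
    """A container component: it can hold any other components."""
    def __init__(self, children=()):
        self.children = list(children)

    def render(self) -> str:
        inner = "".join(child.render() for child in self.children)
        return f"<div class='grid'>{inner}</div>"

class Label(Component):
    def __init__(self, text):
        self.text = text

    def render(self) -> str:
        return f"<span>{self.text}</span>"

# A "module" being loaded registers its components with the registry:
register(Grid([Label("hello"), Label("world")]))

for component in _registry:
    print(component.render())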
If you're hosting the application, publish (and dogfood) a RESTful API.
If you're distributing software, look at OSGi.
Here's a small video that at least will give you some hints; the Lego Process [less than 2 minutes long]
There's also a complete recipe for how to create your own framework based extensively on Modularization...
The most important key element to make modularized software is to remember that it's purely [mostly] a matter of how loosely coupled you can create your systems. The more loosely coupled, the easier it is to modularize...