Application Insights strategies for a web API serving multiple clients - asp.net-core

We have a back-end API running ASP.NET Core, with two front ends: a SPA web site (Vue.js) and a progressive web app (for mobile users). The front ends are basically client code only, and all services are on different domains. We don't use cookies, as authentication uses bearer tokens.
We've been playing with Application Insights for monitoring, but as the documentation is not very descriptive for our situation, I would like some more input on the best strategy and the possibilities for:
Tracking users and metrics without cookies, all the way from e.g. a button click in the application, through the server call and the Entity Framework/SQL query (I see that this is currently not supported; see How to enable dependency tracking with Application Insights in an Asp.Net Core project), to processing the data and presenting the result on the client.
Separating calls from mobile and standard web in an easy manner in Application Insights queries. Any way to show this in the standard charts that appear initially would be beneficial.
Making sure that our strategy will also fit situations where other external clients access the API; we should be able to identify these easily and see how much load they put on the system.
Doing all of the above with the least amount of code.

This might be worthy of several independent questions if you want specifics on any of them. (And generally your last bullet is always implied, isn't it? :))
What have you tried so far? Most of the "best way for you" kinds of things are going to be opinions, though.
For general answers:
re: tracking users...
If you're already doing user info/auth for other purposes, you'd just set the various context.user.* fields on the incoming request's telemetry context with the info you have. All other telemetry that occurs in that same telemetry context would then inherit whatever user info you already have.
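For example, here's a minimal sketch of a telemetry initializer that does this in ASP.NET Core (assuming the Microsoft.ApplicationInsights.AspNetCore package and that auth middleware has already populated the request's ClaimsPrincipal):

    using Microsoft.ApplicationInsights.Channel;
    using Microsoft.ApplicationInsights.Extensibility;
    using Microsoft.AspNetCore.Http;

    // Stamps the authenticated user onto every telemetry item created
    // in the current request's context.
    public class UserTelemetryInitializer : ITelemetryInitializer
    {
        private readonly IHttpContextAccessor _httpContextAccessor;

        public UserTelemetryInitializer(IHttpContextAccessor httpContextAccessor)
        {
            _httpContextAccessor = httpContextAccessor;
        }

        public void Initialize(ITelemetry telemetry)
        {
            var user = _httpContextAccessor.HttpContext?.User;
            if (user?.Identity?.IsAuthenticated == true)
            {
                telemetry.Context.User.AuthenticatedUserId = user.Identity.Name;
            }
        }
    }

    // Registered in Startup.ConfigureServices:
    //   services.AddHttpContextAccessor();
    //   services.AddSingleton<ITelemetryInitializer, UserTelemetryInitializer>();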
re: separating calls from mobile and standard...
If you're already serving these as different services/domains and using the same instrumentation key for both, then the domain/host info of page views or requests is already there; you can filter/group on it in the portal, or write custom queries in the Analytics portal to analyze it that way. If you know which site it is regardless of the host, you could instead add that as a custom property in the telemetry context (via a telemetry initializer like the one sketched below), which also saves you from dealing with host info.
re: external callers via an api
Similarly, if you're already exposing an API and using auth, you should (ideally) already know who the inbound callers are, and you can set that info in custom properties as well.
In general, custom properties (string:string key/value pairs) and custom metrics (string:double key/value pairs) are your friends. You can set them on contexts so that all the events generated in that context inherit the same properties, or you can explicitly set them on an individual TrackEvent (or any of the other Track* calls) to send specific properties/metrics with any single event.
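As a minimal sketch (the event, property, and metric names here are made up), assuming a TelemetryClient provided by the SDK:

    using System.Collections.Generic;
    using Microsoft.ApplicationInsights;

    public class CheckoutService
    {
        private readonly TelemetryClient _telemetry;

        public CheckoutService(TelemetryClient telemetry) => _telemetry = telemetry;

        public void CompleteCheckout(string clientType, double basketTotal)
        {
            // Properties are string:string, metrics are string:double.
            _telemetry.TrackEvent(
                "CheckoutCompleted",
                properties: new Dictionary<string, string> { ["clientType"] = clientType },
                metrics: new Dictionary<string, double> { ["basketTotal"] = basketTotal });
        }
    }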
You can also use telemetry initializers to augment or filter any telemetry that's being generated automatically (like requests or dependencies on the server side, or page views and AJAX dependencies client side).
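For instance, a minimal sketch of an initializer that stamps a clientType custom property on everything (the property name and value are assumptions), which covers the mobile-vs-web split above; filtering, by contrast, would use an ITelemetryProcessor:

    using Microsoft.ApplicationInsights.Channel;
    using Microsoft.ApplicationInsights.DataContracts;
    using Microsoft.ApplicationInsights.Extensibility;

    // Tags every telemetry item with a clientType custom property so that
    // the portal charts and queries can filter/group on it.
    public class ClientTypeTelemetryInitializer : ITelemetryInitializer
    {
        public void Initialize(ITelemetry telemetry)
        {
            if (telemetry is ISupportProperties item &&
                !item.Properties.ContainsKey("clientType"))
            {
                // Decide this from config, host name, or a custom header.
                item.Properties["clientType"] = "spa-web";
            }
        }
    }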

Related

RESTful API: Where should I code my workflow?

I am developing a RESTful API. This is my first API, but also my first really big coding project. As such, I'm still learning a lot about architecture etc.
Currently, I have my API set up in the following layers:
HTTP Layer
Resource Layer
Domain Model / Business Logic Layer
Data Access / Repository Layer
Persistent Storage / DB Layer
The issue I have run into at the moment is where do I need to put workflow objects / managers? By workflows, I mean code that evaluates what next step is required by the end user. For example, an e-commerce workflow. User adds item to basket, then checks out, then fills in personal details, then pays. The workflow would be responsible for deciding what steps are next, but also what steps are NOT allowed. For example, a user couldn't cause errors in the API by trying to pay before they have entered personal details (maybe they recall the URI for payments and try to skip a step). The workflow would check to see that all previous steps had been completed, if not, would not allow payment.
Currently, my workflow logic is in the Resource Layer. I am using hypermedia links to present the workflow to the user e.g. providing a 'next step' link. The problem I have with this is that the resource layer is a top level layer, and more aligned with presentation. I feel it needs to know too much about the underlying domain model to effectively evaluate a workflow i.e. it would need to know it has to check the personal_details entity before allowing payment.
This now leads me to thinking that workflows belong in the domain model. This does make a lot more sense, as really workflows are part of the business logic and I think are therefore best placed in the domain layer. After all, replace the Resource Layer with something else, and you would still need the underlying workflows.
But now the problem is that workflows require knowledge of several domain objects to complete their logic. It now feels right that they maybe belong in their own layer, between the Resource and Domain Layers?
HTTP Layer
Resource Layer
Workflow Layer
Domain Model / Business Logic Layer
Data Access / Repository Layer
Persistent Storage / DB Layer
I'm just wondering if anyone has any other views or thoughts on this? As I said, I have no past application experience to know where workflows should be placed. I'm really just learning this for the first time, so I want to make sure I'm going about it the right way.
Links to articles or blogs that cover this would be greatly appreciated. Love reading up on different implementations.
EDIT
To clarify, I realise that HATEOAS allows the client to navigate through the 'workflow', but there must be something in my API that knows what links to show, i.e. it is really defining the workflow that is allowed. It presents workflow-related links in the resource, but additionally it validates that requests are in sync with the workflow. Whilst I agree that a client will probably only follow the links provided in the resource, the danger (and beauty) of REST is that it's URI-driven, so there is nothing stopping a mischievous client trying to 'skip' steps in the workflow by making an educated guess at the URI. The API needs to spot this and return a 302 response.
The answer to this question has taken me a fair bit of research, but basically the 'workflow' part has nothing to do with REST at all and is more to do with the application layer.
My system had the application logic and the REST API too tightly coupled. I solved my problem by refactoring to reduce the coupling, and now the workflow lives within the context of the application.
REST encourages you to create a vocabulary of nouns (users, products, shopping carts) against an established set of verbs (GET, POST, PUT, DELETE). If you stick to this rule, then in your example the workflow really is defined by the set of interactions the user has with your site. It is how the user uses your app, which is really defined by the UI. Your REST services should react appropriately to invalid state requests, such as attempting to checkout with an empty cart, but the UI may also prevent such requests using script, which is an optional characteristic of REST.
For example, the UI which displays a product to the user might also display a link which would permit the user to add that product to their cart (POST shoppingcart/{productId}). The server really shouldn't care how the user got to that POST, only that it should add that product to the user's cart and return an updated representation of the cart to the user. The UI can then use JavaScript to determine whether or not to display a link to checkout only if the shopping cart has one or more items.
So it seems that your workflow lives outside the REST service and is rather defined by the navigation in your pages, which interact with your REST services as the user requests things. It's certainly possible that you might have internal workflows which must occur within your application based on the states setup by the user. But what you seem to be describing is a user interaction within the site, and while that's indeed a workflow, it seems better defined by your UI(s) than by a dedicated server-side component/layer.
You touch on the workflow (aka business logic) part of an API. Technically this is a separate concern from the API part which is the interface. Sure, as you mention, HATEOAS allows you to suggest certain actions which are valid, but you should be careful to maintain statelessness.
In REST applications, there must not be session state stored on the server side. Instead, it must be handled entirely by the client.
So, if there's session state on the server, it's not REST.
For your shopping cart example, you can save state in a separate caching layer like Redis. As for your workflows: you wouldn't want to put business logic like calculating the shopping cart or total bill in a domain model; that would go in the service layer.
You talked about mischievous users guessing URLs. This is always a concern and should be handled by your security. If the URL to delete a user is DELETE /user/3782 ... they can easily guess how to delete all the users. But you shouldn't rely only on obfuscating the URLs. You should have real security and access checks inside your endpoints, checking that each request is valid.
The same solution applies to your shopping cart concerns. You'll need to grant a token that carries their shopping information and use it to validate each action, regardless of whether they knew the right URL or not. There are no shortcuts when it comes to security.
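As a rough sketch of that kind of server-side guard (all the type and step names here are hypothetical), the payment action re-validates the workflow state on every request instead of trusting the client to have followed the links:

    using System.Collections.Generic;

    public class Order
    {
        public List<string> Items { get; } = new List<string>();
        public string PersonalDetails { get; set; } // null until that step completes
    }

    public interface IOrderRepository
    {
        Order Find(int orderId);
    }

    public class PaymentWorkflow
    {
        private readonly IOrderRepository _orders;

        public PaymentWorkflow(IOrderRepository orders) => _orders = orders;

        public bool TryPay(int orderId, out string error)
        {
            var order = _orders.Find(orderId);
            if (order == null || order.Items.Count == 0 || order.PersonalDetails == null)
            {
                // The client skipped a step (e.g. guessed the payment URI);
                // reject rather than trusting the UI to have enforced the flow.
                error = "Previous workflow steps have not been completed.";
                return false;
            }
            // ... charge the payment here ...
            error = null;
            return true;
        }
    }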
You may want to re-orient your architecture along the lines of DDD (Domain-Driven Design) and perhaps use an MSA (microservice architecture); that way you can shift from an orchestrated workflow to EDA (event-driven architecture) and choreography of micro processes.

Strategies for designing REST APIs for all types of client devices

The question is more targeted at server side development.
When writing a REST API, I want to write it in such a way that it can be consumed by both desktop and mobile applications.
Could see two possible approaches:
Each API should support pagination, and the responsibility for how much data to fetch in one go should be delegated to the client. So mobile apps will ask for fewer records in one go and desktop applications will ask for more.
Separate APIs for mobile devices, hosted separately. The front-end web server can check the user agent (i.e. the source the request is coming from) and, if it's a mobile device, re-route the request to the server hosting the APIs for mobile devices.
Interested to know more strategies around this.
Appreciate your inputs.
I would suggest a bit of both (1) and (2); here's how.
Instead of rebuilding a whole new API for mobile, have adapters for all the supported devices, i.e. a layer on top of your REST API implementation which instructs the underlying service to return content suitable for the selected device.
Coming to pagination, you can parameterize it as an input to the adapter layer mentioned above.
I would recommend something closer to option (1). If the main difference between the clients will be the amount of data they request at a time, it seems trivial to add some kind of query parameter or HTTP header to the REST API indicating how many records to return, for instance.
Relying on the User-Agent header may require you to maintain a list of known client user agents and match against them, which would be an additional maintenance cost on top of running a separate API.
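For illustration, a minimal sketch of option (1) in ASP.NET Core (the route, the limits, and the in-memory data source are all assumptions):

    using System;
    using System.Linq;
    using Microsoft.AspNetCore.Mvc;

    [ApiController]
    [Route("api/products")]
    public class ProductsController : ControllerBase
    {
        // Stand-in for a real repository/DbContext.
        private static readonly string[] Products =
            Enumerable.Range(1, 500).Select(i => $"Product {i}").ToArray();

        // Each client (mobile or desktop) picks its own page size.
        [HttpGet]
        public IActionResult Get(int page = 1, int pageSize = 20)
        {
            pageSize = Math.Clamp(pageSize, 1, 100); // guard against abusive sizes
            var items = Products.Skip((page - 1) * pageSize).Take(pageSize);
            return Ok(new { page, pageSize, items });
        }
    }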

Can client side mess with my API?

I have a website that revolves around transactions between two users. Each user needs to agree to the same terms. If I offer an API so other websites can implement this on their own sites, I want to make sure that those websites cannot mess with the process by including extra fields or things that are irrelevant to my application. Is this possible?
If I were to implement such a thing, I would let other websites use tokens/URLs/widgets that link them to my website. So, for example, website X wants to use my service to get users A and B to agree to the same terms. Their page would have an embedded form/frame generated from my website, and user B would also receive an email with a link to my website's page (or a page of website X with a form/frame generated from my server).
Consider how different sites use eBay to enable users to pay. You buy everything on the site, but when you are paying, either you are taken to an eBay page and come back after payment, or the website has a small form/frame that is directly linked to eBay.
But this is my solution, one way of doing it. Hope this helps.
It depends on how your API is implemented. It takes considerably more work, thought, and engineering to build an API that can literally take any kind of data or to build an API that can take additional, named, key/value pairs as fields.
If you have implemented your API in this manner, then it's quite possible that users of this API could use it to extend functionality or build something slightly different by passing in additional data.
However, if your API is built to where specific values must be passed and these fields are required, then it becomes much more difficult for your API to be used in a manner that differs from what you originally intended.
For example, Google has many different APIs for different purposes, and each API has a very specific set of required parameters that a developer must supply in order to make a successful HTTP request. While the goal of these APIs is to allow developers to extend functionality, they allow access to only very specific pieces of data.
Lastly, you can use authentication to prevent unauthorized access to your API. The specific implementation details depend largely on the platform you're working with as well as how the API will be used. For instance, if users must log in to use services provided by your API, then a form of OAuth may suffice. However, if other servers will consume your API, then the authorization will have to take place in the HTTP headers.
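As a minimal ASP.NET Core-style sketch of the header-based case (the middleware and the token check are hypothetical; real validation would verify signature, expiry, audience, etc.):

    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;

    // Rejects server-to-server calls that don't present a bearer token
    // in the Authorization header before any endpoint logic runs.
    public class ApiAuthMiddleware
    {
        private readonly RequestDelegate _next;

        public ApiAuthMiddleware(RequestDelegate next) => _next = next;

        public async Task InvokeAsync(HttpContext context)
        {
            var header = context.Request.Headers["Authorization"].ToString();
            if (!header.StartsWith("Bearer ") || !IsValidToken(header.Substring(7)))
            {
                context.Response.StatusCode = StatusCodes.Status401Unauthorized;
                return;
            }
            await _next(context);
        }

        // Stand-in for real token validation.
        private static bool IsValidToken(string token) => !string.IsNullOrEmpty(token);
    }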
For more information on API best practices, see 7 Rules of Thumb When You Build an API, and a slideshow from a Google Engineer titled How to Design a Good API and Why That Matters.

Prevent Application changes breaking API

I have an application for which I am currently writing an API. This is the first time I have created an API from start to finish, and I have read lots of good articles on how to do this. However, a lot of that material focuses on the API development specifically (as it should), and I have not found anything that touches on how to ensure the API doesn't get broken by changes that happen within the application project.
My application consists of an ASP.NET MVC web app which makes calls to a Service Layer to undertake CRUD-like operations. So, to get a list of all the users in my app, the MVC app calls the service layer, asks for them, and is presented with a collection of users. My API (WCF Web API) also uses this service layer internally, and when I request a list of users I again get back a collection of users (JSON, XML etc).
However, if for some reason another developer changes the underlying User domain object by renaming a field, say surname to lastname, then this is potentially going to break my API, as the Service Layer is going to return a user object with a new field name when my API is expecting something else. My API does in fact have its own representation of objects, which get mapped from the application objects when requested, but this mapping will not map the surname property, which will be returned as null.
Therefore, do all changes in the app have to be strictly controlled because I provide an API? If so, do you have to change your app and API in tandem? What if changes are missed? That doesn't seem right to me, hence my post to seek greater knowledge.
Again I’m quite new to this so any help on this would be much appreciated.
It is inevitable that your application will evolve. If you can create new versions of the API as your application evolves and support the older versions, then give notice of when the older APIs will become obsolete.
If you own the API design and don't really want anyone to pollute it, introduce dedicated DTOs for the API's use, mapped from the underpinning domain models. Then your presentation (via XML or JSON) won't change even if the underlying models change frequently.
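A minimal sketch of what that looks like (the type and property names are made up): the DTO pins the wire contract, and the mapping is the single place that absorbs domain renames, failing at compile time instead of silently returning null over the wire:

    // Domain model: free to evolve with the application.
    public class User
    {
        public int Id { get; set; }
        public string LastName { get; set; } // was "Surname" before a rename
    }

    // API contract: stays stable regardless of domain changes.
    public class UserDto
    {
        public int Id { get; set; }
        public string Surname { get; set; }
    }

    public static class UserMappings
    {
        // The one place that breaks (at compile time) when the domain
        // changes, instead of the API silently serving null.
        public static UserDto ToDto(this User user) => new UserDto
        {
            Id = user.Id,
            Surname = user.LastName
        };
    }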

What should a developer know before building an API for a community based website?

What things should a developer designing and implementing an API for a community based website know before starting the heavy coding? There are a bunch of APIs out there like Twitter API, Facebook API, Flickr API, etc which are all good examples. But how would you build your own API?
What technologies would you use? I think it's a good idea to use a REST-like interface so that the API is accessible from different platforms/clients/browsers/command-line tools (like curl). Am I right? I know that all the principles of web development should be met, like caching, availability, scalability, security, protection against potential DoS attacks, validation, etc. And when it comes to APIs, some of the most important things are backward compatibility and documentation. Am I missing something?
On the other hand, thinking from the user's point of view (I mean the developer who is going to use your API), what would you look for in an API? Good documentation? Lots of code samples?
This question was inspired by Joel Coehoorn's question "What should a developer know before building a public web site?".
This question is a community wiki, so I hope you will help me put in one place all the things that should be addressed when building an API for a community based website.
If you really want to define a REST API, then do the following:
Forget all technology issues other than HTTP and media types.
Identify the major use cases where a client will interact with the API
Write client code that performs those "use cases" against a hypothetical HTTP server. The only information the client should start with is the response from a GET request to the root API URL. The client should identify the media type of the response from the HTTP Content-Type header, and it should parse the response. That response should contain links to other resources that allow the client to perform all of the API's required operations.
When creating a REST API, it is easier to think of it as a "user interface" for a machine rather than as an exposed object model or process model. Imagine the machine navigating the API programmatically by retrieving a response, following a link, processing the response and following the next link. The client should never construct a URL based on its knowledge of how the server organizes resources.
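A minimal sketch of such a client (the root URL, the "orders" link relation, and the link-parsing helper are all hypothetical; a real client would parse links according to the chosen media type):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class HypermediaClient
    {
        public static async Task Main()
        {
            using (var http = new HttpClient())
            {
                // The only URL the client knows in advance is the API root.
                var root = await http.GetAsync("https://api.example.com/");
                var mediaType = root.Content.Headers.ContentType?.MediaType;
                var body = await root.Content.ReadAsStringAsync();

                // Follow a link found in the response rather than building
                // the URL from assumed knowledge of the server's layout.
                var ordersUrl = ExtractLink(body, mediaType, rel: "orders");
                var orders = await http.GetAsync(ordersUrl);
                Console.WriteLine(await orders.Content.ReadAsStringAsync());
            }
        }

        // Stand-in: parse the body per its media type (Atom, HAL, ...) and
        // return the href of the link with the given relation name.
        private static string ExtractLink(string body, string mediaType, string rel)
            => throw new NotImplementedException();
    }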
How those links are formatted and identified is critical. The most important decision you will make in defining a REST API is your choice of media types. You either need to find standard ways of representing that link information (think Atom, microformats, Atom link relations, HTML5 link relations), or, if you have specialized needs and don't need really wide reach to many clients, you could create your own media types.
Document how those media types are structured and what links/link-relations they may contain. Specific information about media types is critical to the client. Having a server return Content-Type: application/xml is useless to a client if it wants to do anything more than parse the response. The client cannot know what is contained in a response of type application/xml. Some people believe you can use XML Schema to define this, but there are several disadvantages to that, and it violates the REST "self-descriptive message" constraint.
Remember that what the URL looks like has absolutely no bearing on how the client should operate. The only exception to this, is that a media type may specify the use of templated URIs and may define parameters of those templates. The structure of the URL will become significant when it comes to choosing a server side framework. The server controls the URL structure, the client should not care. However, do not let the server side framework dictate how the client interacts with the API and be very cautious about choosing a framework that requires you to change your API. HTTP should be the only constraint regarding the client/server interaction.