Since upgrading @spartacus/core I've noticed that interceptors (among a few other injectables) have been aliased with the ɵ (theta-like) character as a prefix.
I've noticed the Angular team typically does the same for internals that are not intended for override or extension. I presume that's what the spartacus team is going for too?
May I ask why these interceptors are regarded as internals?
We're currently integrating with our own separate OIDC server and in order to properly achieve this we've needed to override a few of the auth interceptors, namely UserTokenInterceptor and ClientTokenInterceptor.
Is there an alternative recommended approach, or would we have to downgrade @spartacus/core?
We could make sure to check the ɵ-prefixed alias token we're overriding each time we bump @spartacus/core, though this goes somewhat against the grain - I noticed the aliases changed when we bumped Spartacus to 2.x.
I've toyed with the idea of creating an overarching null interceptor to nullify the other Spartacus interceptors' effects, though that's probably not possible for interceptors like the ClientTokenInterceptor that make their own preflight network requests.
We recently started work on an authorization module to support different OAuth flows and to make integration with external identity providers easier. As part of those changes we have already looked at the interceptors; there will be a couple of changes to them, and they will be exported in the public @spartacus/core API. At the moment we are targeting the 3.0 release for these changes, but I can't promise any specific date.
I am working on APIs in .NET Core 2.2 and I'd like to version my API.
I'm looking for solutions other than:
Routing method (api/v1/controller, api/v2/controller)
Routing method using the APIVersioning package (api/v{version:apiVersion}/controller)
I want to know if there are any other solutions in which I don't have to change the route attribute. I might be completely wrong, but can I use middleware? Like, a map delegate to map the incoming requests (based on the v1 or v2 they carry) to the right controller?
I'd highly appreciate any feedback and suggestions.
You can use the APIVersioning package and configure it so it selects the version based on the HTTP Header.
services.AddApiVersioning(c =>
{
    c.ApiVersionReader = new HeaderApiVersionReader("api-version");
});
And then you can use the [ApiVersion] attribute on your controllers.
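For illustration, a minimal sketch of what that can look like (the controller and action names here are made up). With the header reader configured above, a request carrying api-version: 2.0 lands on the 2.0 action:

using Microsoft.AspNetCore.Mvc;

[ApiVersion("1.0")]
[ApiVersion("2.0")]
[Route("api/[controller]")]
[ApiController]
public class OrdersController : ControllerBase
{
    // Served for "api-version: 1.0"
    [HttpGet]
    public IActionResult Get() => Ok("orders v1");

    // Served for "api-version: 2.0"
    [HttpGet, MapToApiVersion("2.0")]
    public IActionResult GetV2() => Ok("orders v2");
}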
Can you use custom middleware? Yes; however, be advised that endpoint selection is typically much more involved. The routing system provides extension and customization points, which is exactly what API Versioning uses for you. Creating your own versioning solution will be a lot more involved than having to add a route template parameter.
If you're going to version by URL segment, then API Versioning requires that you use the ApiVersionRouteConstraint. The default name is registered as apiVersion, but you can change it via ApiVersioningOptions.RouteConstraintName. The route parameter name itself is user-defined. You can use whatever name you want, but version is common and clear in meaning.
Why is a route constraint required at all? API Versioning needs to resolve an API version from the request, but it has no knowledge or understanding of your route templates. For example, how would ASP.NET know that the route parameter id in values/{id:int} has to be an integer without the int constraint? Short answer - it doesn't. The API version works the same way. You can compose the route template however you want and API Versioning knows how and where to extract the value using the route constraint. What API Versioning absolutely does not do is magic string parsing. This is a very intentional design decision. There is no attempt made by API Versioning to try and auto-magically extract or parse the API version value from the request URL. It's also important to note that the v prefix is very common for URL segment versioning, but it's not part of the API version. The approach of using a route constraint negates the need for API Versioning to worry about a v prefix. You can include it in your route template as a literal, if you want to.
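To make the URL segment style concrete, here is a hedged sketch (the controller is illustrative); the v is just a literal in the template, and the apiVersion constraint tells API Versioning where the version lives, exactly as int tells routing how to read id:

using Microsoft.AspNetCore.Mvc;

[ApiVersion("1.0")]
[Route("api/v{version:apiVersion}/[controller]")]
[ApiController]
public class ValuesController : ControllerBase
{
    // e.g. GET api/v1/values/5
    [HttpGet("{id:int}")]
    public IActionResult Get(int id) =>
        Ok(new { id, version = HttpContext.GetRequestedApiVersion()?.ToString() });
}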
If the issue or concern is having to repeatedly include the API version constraint in your route templates, it really isn't any different than including the api/ prefix in every template (which I presume you are doing). It is fairly easy to remain DRY by using one of the following, which could include the prefix api/v{version:apiVersion} for all API routes:
Extend the RouteAttribute and prepend all templates with the prefix; this is the simplest (a sketch follows after this list)
Roll your own attribute and implement IRouteTemplateProvider
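As a rough sketch of the first option (the attribute name is made up), deriving from RouteAttribute keeps the versioned prefix in a single place:

using Microsoft.AspNetCore.Mvc;

// Hypothetical attribute: prepends the common versioned prefix to every template.
public sealed class VersionedRouteAttribute : RouteAttribute
{
    public VersionedRouteAttribute(string template)
        : base("api/v{version:apiVersion}/" + template)
    {
    }
}

// Usage: the effective template becomes api/v{version:apiVersion}/products
[ApiVersion("1.0")]
[VersionedRoute("products")]
[ApiController]
public class ProductsController : ControllerBase
{
    [HttpGet]
    public IActionResult Get() => Ok();
}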
Ultimately, this requirement is yet another consequence of versioning by URL segment, which is why I don't recommend it. URL segment versioning is the least RESTful of all versioning methods (if you care about that) because it violates the Uniform Interface constraint. All other out-of-the-box supported versioning methods do not have this issue.
For clarity, the out-of-the-box supported methods are:
By query string (default)
By header
By media type (most RESTful)
By URL segment
Composition of n methods (ex: query string + header; see the sketch below)
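As a hedged sketch of that last option, the readers can be combined so that either a query string parameter or a header satisfies the version lookup:

services.AddApiVersioning(c =>
{
    // Accept the API version from ?api-version=... or from the api-version header.
    c.ApiVersionReader = ApiVersionReader.Combine(
        new QueryStringApiVersionReader("api-version"),
        new HeaderApiVersionReader("api-version"));
});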
You can also create your own method by implementing the IApiVersionReader.
Attributes are just one way that API versions can be applied. In other words, you don't have to use the [ApiVersion] attribute if you don't want to. You can also use the conventions fluent API or create your own IControllerConvention. The VersionByNamespaceConvention is an example of such a convention that derives the API version from the containing .NET namespace. The methods by which you can create and map your own conventions are nearly limitless.
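For instance, a minimal sketch of the conventions route (the controller name is illustrative), which keeps version metadata out of the controllers themselves:

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Versioning.Conventions;

services.AddApiVersioning(c =>
{
    // Declare versions fluently instead of with [ApiVersion] attributes.
    c.Conventions.Controller<OrdersController>()
                 .HasApiVersion(new ApiVersion(1, 0));

    // Alternatively, derive the version from the namespace,
    // e.g. a controller under a ...V2 namespace maps to 2.0.
    c.Conventions.Add(new VersionByNamespaceConvention());
});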
At the moment our config on .NET Core looks like this:
"RouteClaimsRequirement": { "Claim": "settings_read" },
Is it possible to add more claims, like below?
"RouteClaimsRequirement": { "Claim": "settings_read,settings_admin" },
Otherwise people with admin permissions would end up getting a 403 error.
We're dealing with the same issue currently. After a thorough investigation, we landed on a set of possible approaches; which one suits you best is up to you to decide.
The properly proper approach is to implement API scopes on the client that connects to your system. Depending on the flavor of your security (we're using IdS4), this may make a lot of sense or none at all.
The semi-proper approach is to horse around implementing custom middleware in Ocelot, moving the control logic from the JSON configuration into C# (a rough sketch follows at the end of this answer).
The less proper approach is to set up a bunch of sections in the configuration file, one for each endpoint, verb, path, etc., each dealing with its own single claim value. This solution is ugly, scales poorly, and I suspect it wouldn't actually work either way.
We're currently discussing which of the first two approaches is the most appropriate.
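To make the second approach a bit more concrete, here is a very rough sketch, not Ocelot-specific: it assumes a default authentication scheme is configured so the user is populated before Ocelot runs, it mirrors the "Claim" type and values from the question, and it skips per-route filtering. Ocelot also exposes its own middleware-injection hooks, which may be a better fit:

// Inside Startup.Configure (requires: using System.Linq;)
app.UseAuthentication();

app.Use(async (context, next) =>
{
    // Let the request through if the user holds ANY of the allowed values,
    // instead of requiring an exact match on a single configured value.
    var allowed = new[] { "settings_read", "settings_admin" };
    var hasAny = context.User.Claims.Any(c => c.Type == "Claim" && allowed.Contains(c.Value));

    if (!hasAny)
    {
        context.Response.StatusCode = 403;
        return;
    }

    await next();
});

app.UseOcelot().Wait();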
I'm thinking of using Swashbuckle for WebAPI Swagger documentation.
I was wondering if there is any performance impact to take into consideration?
There is a slight overhead during app startup but the thing you really need to look out for is that the HTTP call to request the UI/docs is an expensive and potentially long-running one. You therefore want to make sure you are caching your documentation. This is now possible by overriding the default swagger provider in your SwaggerConfig.cs Register method:
c.CustomProvider(defaultProvider => new CachingSwaggerProvider(defaultProvider));
And then implementing the ISwaggerProvider interface in your CachingSwaggerProvider class. See the example class in the GitHub documentation; it is roughly along the lines of the sketch below.
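A sketch of that caching provider, under the assumption of classic Swashbuckle 5.x for ASP.NET Web API (where ISwaggerProvider exposes GetSwagger(rootUrl, apiVersion)):

using System.Collections.Concurrent;
using Swashbuckle.Swagger;

public class CachingSwaggerProvider : ISwaggerProvider
{
    private static readonly ConcurrentDictionary<string, SwaggerDocument> Cache =
        new ConcurrentDictionary<string, SwaggerDocument>();

    private readonly ISwaggerProvider _swaggerProvider;

    public CachingSwaggerProvider(ISwaggerProvider swaggerProvider)
    {
        _swaggerProvider = swaggerProvider;
    }

    public SwaggerDocument GetSwagger(string rootUrl, string apiVersion)
    {
        // Build the document once per root URL/version pair and reuse it afterwards.
        return Cache.GetOrAdd(
            string.Format("{0}_{1}", rootUrl, apiVersion),
            _ => _swaggerProvider.GetSwagger(rootUrl, apiVersion));
    }
}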
I'm building an Ember application with ember-cli and, as a persistence layer, an HTTP API using rails-api + Grape + ActiveModelSerializer. I am at a very basic stage, but I want to set up my front-end and back-end in as standard and clean a way as possible before going on to develop further API endpoints and Ember models.
I could not find a comprehensive guide about serialization and deserialization made by the store, but I read the documentation about DS.ActiveModelSerializer and DS.ActiveModelAdapter (which says the same things!) along with their parent classes.
What are the exact roles of adapter and serializer and how are they related?
Considering the tools I am using do I need to implement both of them?
Both Grape/ActiveModelSerializer and Ember Data offer customization. As my back-end and front-end are built for each other and not for anything else, which side is it better to customize?
Hmmm...which side is better is subjective, so this is sort of my thought process:
Generally speaking, one would want an API that is able to "talk to anything" in case a device client is required or the API gets consumed by other parties in the future, so that would suggest configuring your Ember app to talk to your backend. But again, I think this is a subjective question/answer because no one but you and your team can tell what's good for the scenarios you are (or might be) facing while the app gets built.
I think the guides explain the Adapter and Serializer role/usage and customization pretty decently these days.
As for implementing them, it may be necessary to create an adapter for your application to define a global namespace if you have one (if your controllers are behind another path like localhost:3000/api/products, then set namespace: 'api'; otherwise this is not necessary), and similarly the host if you're using CORS. If you're working with ember-cli, you might also want to set the security policy in the environment to allow connections to other domains for CORS and the like. This can be done per model as well. But again, all of this is subjective as it depends on what you want/need to achieve.
We have an established WCF SOAP service. Its interface is defined in WSDL, from which C# classes are generated for our server (customers generate client-side bindings in various languages, from the same WSDL). The WSDL has a current version, which we can change a bit, and old versions, which we can't change or drop without a deprecation period, consultation etc. The SOAP requests tend to be complicated, having multiple XML namespaces within the same request.
The WCF SOAP service has a lot of "smarts" in it, and provides exactly the kinds of fetching and reporting facilities that we need for a new Web application that we need to make. We hope to use AngularJS for the client side of that. But these complex SOAP requests aren't easy to make in the JavaScript world. If only we had a REST service, we could use Angular's $resource service. If not that, then a server that spoke JSON, albeit in an RPC style like SOAP, would run a fairly close second.
I've had various ideas for how the impedance mismatch between our server and client might be mitigated. But nothing sounds quick or easy.
I've thought of:
Write a new REST service. Exactly what the client-side wants, but a serious piece of new development.
WebHttpBinding looks to offer something, but it seems to require custom attributes in the C# (hard to achieve when our C# is generated from WSDL) and possibly wouldn't support our complex types
Obtain or write loads of client-side JS to abstract away calling SOAP services. But, unless this can be auto-generated from the WSDL, it's a huge amount of client-side code to write.
Write an IDispatchMessageFormatter for the server, to accept some JSON format of messages that I invent. Sounds hard, especially as good examples of people implementing and integrating IDispatchMessageFormatter seem hard to come by.
Write a MessageEncoder to swap between JSON and XML. But this isn't really an encoding operation, as became very clear when I tried to write it!
I'm searching for suggestions.
Generally, I recommend a REST service for any AngularJS development and have wrapped a number of legacy systems with Node.js API servers. Of course there is a massive amount of "it depends", but generally most projects will be happier and more productive following that route.
Some Things To Think About
How well does your current SOAP API fit the user interface requirements?
Are you experienced with Express, Sinatra, Flask or other micro-framework that allow rapid development of REST APIs? I find I can build a solid Node.js Express API server in a couple of hours and then extend it as I build the AngularJS application out.
How experienced are you with AngularJS? It's a more advanced project to build a complex data layer client-side.
Six Reasons Why REST is Important for AngularJS
1. It's much faster to write Angular code using $resource and $http. Getting the API right is a good recommendation for effective AngularJS development. Indeed, you could argue that AngularJS is designed for REST, and that's why plain JavaScript works for the model (see 2).
2. Angular's plain-old JavaScript object data model works well with a REST API that speaks JSON matching the user interface. However, issues arise when there isn't a good fit - Angular doesn't have a formal data model, so you end up writing a lot of code trying to rationalize your API to work well with Angular. Third-party libraries like breeze.js may offer some help, but it's still awkward.
3. You can scale easily with caching. It's easy to add Redis, memcached, Varnish or other common HTTP caching solutions into the mix. Resource-based abstractions are perfect for caching strategies due to the transparency and idempotency of a REST API.
4. Loose coupling of front-end and server - it will be easier to support changes to the backend if you migrate off SOAP or need to integrate with other services.
5. It's generally easier to test JSON APIs separately from AngularJS logic, so your test suites will be simpler and more effective.
6. Your new REST API will be easier to leverage for future AngularJS and JSON-oriented projects.
I hope that helps.
Cheers,
Nick