Ocelot gateway support multiple claims in RouteClaimsRequirement - asp.net-core

At the moment our config on .NET Core looks like this:
"RouteClaimsRequirement": { "Claim": "settings_read" },
Is it possible to add more claims, like below?
"RouteClaimsRequirement": { "Claim": "settings_read,settings_admin" },
Otherwise people with admin permissions would end up getting a 403 error.

We're currently dealing with the same issue. After a thorough investigation, we landed on a set of possible approaches. Which one suits your stomach is up to you to decide.
The properly proper approach is to implement API scopes on the client that connects to your system. Depending on the flavor of your security (we're using IdS4), this may make a lot of sense or none at all.
The semi-proper approach is to horse around implementing custom middleware in Ocelot, moving the control logic from the config file into C# (a sketch of that idea follows below).
The less proper approach is to set up a bunch of sections in the config file, one for each endpoint, verb, path etc., each dealing with its own single claim value. This solution sucks, scales poorly, and I suspect it fails either way.
We're currently discussing which of the first two approaches is the most appropriate.
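For the middleware route, here is a minimal sketch of the idea using plain ASP.NET Core primitives. The claim type and values are taken from the question, but how you hook this ahead of Ocelot's own check (for example via OcelotPipelineConfiguration when calling UseOcelot) varies between Ocelot versions, so treat it as an illustration rather than a drop-in solution.
// Minimal sketch: let the request through if the caller holds ANY of the accepted claim values.
// The claim type "Claim" and the values come from the question; wiring this ahead of Ocelot's own
// authorization check is left out because it differs between Ocelot versions.
using System.Linq;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

string[] accepted = { "settings_read", "settings_admin" };

app.Use(async (context, next) =>
{
    // Authorized when any accepted value is present on the "Claim" claim type.
    bool ok = context.User.Claims.Any(c => c.Type == "Claim" && accepted.Contains(c.Value));
    if (!ok)
    {
        context.Response.StatusCode = StatusCodes.Status403Forbidden;
        return;
    }
    await next();
});

app.Run();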

Related

NServiceBus Endpoint Routing Injection

I am trying to inject my own IRouteMessagesToEndpoints in NServiceBus with StructureMap, as I need to redirect various messages to different endpoints depending on some business logic (not via namespace/assembly/type). This would allow it to fire using bus.Send() and be configured to our requirements. I thought this was possible, but I can't seem to get it to work. I have tried using Configure.Component() and ObjectFactory.Configure() for the injection, and both run without any exception, but when I debug my implementation of the interface the breakpoint is never hit.
My question is, can it be done this way (there's nothing on the internet that covers this)? I notice that the EndPointRouter in the GatewayReceiver has a setter, but I cannot work out how to access the property.
Unfortunately, even though IRouteMessagesToEndpoints is a public interface, at the moment it is not possible to replace the default implementation, sorry!
Please raise an issue about it in https://github.com/Particular/NServiceBus.Gateway/issues/new so we can discuss it better.

Ember adapter and serializer

I'm building an Ember application with ember-cli and, as a persistence layer, an HTTP API using rails-api + Grape + ActiveModelSerializer. I am at a very basic stage, but I want to set up my front-end and back-end in as standard and clean a way as possible before going on to develop further API endpoints and Ember models.
I could not find a comprehensive guide about serialization and deserialization done by the store, but I read the documentation about DS.ActiveModelSerializer and DS.ActiveModelAdapter (which says the same things!) along with their parent classes.
What are the exact roles of adapter and serializer and how are they related?
Considering the tools I am using do I need to implement both of them?
Both Grape/ActiveModelSerializer and EmberData offer customization. As my back-end and front-end are for each other and not for anything else, which side is it better to customize?
Hmmm...which side is better is subjective, so this is sort of my thought process:
Generally speaking, one would want an API that is able to "talk to anything" in case a device client is required or in case the API gets consumed by other parties in the future, so that would suggest that you'd configure your Ember app to talk to your backend. But again, I think this is a subjective question/answer 'cause no one but you and your team can tell what's good for a given scenario you are or might be experiencing while the app gets created.
I think the guides explain the Adapter and Serializer role/usage and customization pretty decently these days.
As for implementing them, it may be necessary to create an adapter for your application to define a global namespace if you have one (if your controllers are behind another area like localhost:3000/api/products, then set namespace: 'api'; otherwise this is not necessary), and similarly the host if you're using CORS. If you're working with ember-cli, you might also want to set the security policy in the environment to allow connections to other domains for CORS and the like. This can be done per model as well. But again, all of this is subjective as it depends on what you want/need to achieve.

Existing SOAP service and new Angular Web App

We have an established WCF SOAP service. Its interface is defined in WSDL, from which C# classes are generated for our server (customers generate client-side bindings in various languages, from the same WSDL). The WSDL has a current version, which we can change a bit, and old versions, which we can't change or drop without a deprecation period, consultation etc. The SOAP requests tend to be complicated, having multiple XML namespaces within the same request.
The WCF SOAP service has a lot of "smarts" in it, and provides exactly the kinds of fetching and reporting facilities that we need for a new Web application that we need to make. We hope to use AngularJS for the client side of that. But these complex SOAP requests aren't easy to make in the JavaScript world. If only we had a REST service, we could use Angular's $resource service. If not that, then a server that spoke JSON, albeit in an RPC style like SOAP, would run a fairly close second.
I've had various ideas for how the impedance mismatch between our server and client might be mitigated. But nothing sounds quick or easy.
I've thought of:
Write a new REST service. Exactly what the client-side wants, but a serious piece of new development.
WebHttpBinding looks to offer something. But it seems to me that it requires marking up the C# with custom attributes (hard to achieve when our C# is generated from WSDL) and possibly wouldn't support our complex types (there's a sketch of that markup after this list).
Obtain or write loads of client-side JS to abstract away calling SOAP services. But, unless this can be auto-generated from the WSDL, it's a huge amount of client-side code to write.
Write an IDispatchMessageFormatter for the server, to accept some JSON format of messages that I invent. Sounds hard, especially as good examples of people implementing and integrating IDispatchMessageFormatter seem hard to come by.
Write a MessageEncoder to swap between JSON and XML. But this isn't really an encoding operation, as became very clear when I tried to write it!
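For illustration, this is roughly the attribute markup that WebHttpBinding seems to expect (the service and type names below are invented), which is exactly what we can't easily add to WSDL-generated code:
// Sketch of the attribute markup WebHttpBinding expects; IReportService/ReportDto are invented names.
// A WSDL-generated contract would need these attributes added, or a hand-written wrapper contract.
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IReportService
{
    // GET reports/{id}, returned as JSON instead of a SOAP envelope.
    [OperationContract]
    [WebGet(UriTemplate = "reports/{id}", ResponseFormat = WebMessageFormat.Json)]
    ReportDto GetReport(string id);

    // POST reports with a JSON body.
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "reports",
        RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    ReportDto CreateReport(ReportDto report);
}

public class ReportDto
{
    public string Id { get; set; }
    public string Title { get; set; }
}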
I'm searching for suggestions.
Generally, I recommend a REST service for any AngularJS development and have wrapped a number of legacy systems with Node.js API servers. Of course there is a massive amount of "it depends", but most projects will be happier and more productive following that route.
Some Things To Think About
How well does your current SOAP API fit the user interface requirements?
Are you experienced with Express, Sinatra, Flask or another micro-framework that allows rapid development of REST APIs? I find I can build a solid Node.js Express API server in a couple of hours and then extend it as I build out the AngularJS application.
How experienced are you with AngularJS? It's a more advanced project to build a complex data layer client-side.
Six Reasons Why REST is Important for AngularJS
1. It's much faster to write Angular code using $resource and $http. "Get the API right" is a good recommendation for effective AngularJS development. Indeed, you could argue that AngularJS is designed for REST, and that's why plain JavaScript works for the model (see 2).
2. Angular's plain-old JavaScript object data model works well with a REST API that speaks JSON matching the user interface. However, issues arise when there isn't a good fit: Angular doesn't have a formal data model, so you end up writing a lot of code trying to rationalize your API to work well with Angular. Third-party libraries like breeze.js may offer some solution, but it's still awkward.
3. You can scale easily with caching. It's easy to add Redis or memcache or Varnish or other common HTTP caching solutions into the mix. Resource-based abstractions are perfect for caching strategies due to the transparency and idempotency of a REST API.
4. Loose coupling of front-end and server: it will be easier to support changes to the backend if you migrate off SOAP or need to integrate with other services.
5. It's generally easier to test JSON APIs separately from AngularJS logic, so your test suites will be simpler and more effective.
6. Your new REST API will be easier to leverage for future AngularJS and JSON-oriented projects.
I hope that helps.
Cheers,
Nick

WCF Rest - what are the best practices?

Just started my first WCF REST project and would like some help on what the best practices for using REST are.
I have seen a number of tutorials and there seem to be a number of ways to do things... for example, when doing a POST, I have seen some tutorials which set HttpStatusCodes (OK/Errors etc.), and other tutorials where they just return strings which contain the result of the operation.
At the end of the day, there are 4 operations, and surely there must be a guide that says if you are doing a GET, do it this way, etc., and with a POST, do this...
Any help would be appreciated.
JD
UPDATE
Use ASP.NET Web API.
OK, I left the comment "REST best practices: don't use WCF REST. Just avoid it like the plague" and I feel I have to explain it.
One of the fundamental flaws of WCF is that it is concerned only with the payload. For example, Foo and Bar are the payloads here:
[OperationContract]
public Foo Do(Bar bar)
{
...
}
This is one of the tenets of WCF: no matter what the transport is, we get the payload over to you.
But what it ignores is the context/envelope of the call, which in many cases is transport specific - so a lot of the context gets lost. In fact, HTTP's power lies in its context, not its payload. Back in the earlier versions of WCF, there was no way to get the client's IP address in netTcpBinding, and the WCF team were adamant that they could not provide it. I cannot find the page now, but I remember reading the comments and the MS guys just said this is not supported.
Using WCF REST, you lose the flexibility of HTTP in expressing yourself clearly (and they had to budge later) in terms of:
HTTP Status code
HTTP media types
ETag, ...
The new Web API that Glenn Block is working on addresses this issue by encapsulating the payload in the context:
public HttpResponse<Foo> Do(HttpRequest<Bar> bar) // PSEUDOCODE
{
...
}
But in my testing this is not perfect, and I personally prefer to use frameworks such as Nancy or even plain ASP.NET MVC to expose a web API.
There are some basic rules, coming from the HTTP specification, for using the different HTTP verbs:
GET: This is a pure read operation. Invocation must not cause state change in the service. The response to a GET may be delivered from cache (local, proxy, etc.) depending on caching headers.
DELETE: Used to delete a resource
There is sometimes confusion around PUT and POST - which should be used when? To answer that you have to consider idempotency - whether the operation can be repeated without affecting service state. For example, setting a customer's name to a value can be repeated multiple times without further state change; however, incrementing a customer's bank balance cannot safely be repeated without further state change on the service. The first is said to be idempotent, the second is not.
PUT: Non-delete state changes that are idempotent
POST: Non-delete state changes that are not idempotent
REST embraces HTTP - therefore outcomes should be communicated using HTTP status codes: 200 for success; 201 for creation (and the service should return a URI for the new resource in the HTTP Location header); 4xx for failures due to the nature of the client request (which can be fixed by the client changing what they are doing); 5xx for server errors that can only be resolved server side.
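To make that concrete, here is a rough ASP.NET Web API 2 style sketch (the controller, model and in-memory store are invented for illustration); the same verb-to-status-code mapping applies to whichever framework you end up using.
// Sketch (ASP.NET Web API 2 style; all names invented): mapping verbs to status codes.
using System.Collections.Generic;
using System.Net;
using System.Web.Http;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomersController : ApiController
{
    // Toy in-memory store, just enough to make the status-code handling visible.
    private static readonly Dictionary<int, Customer> Store = new Dictionary<int, Customer>();

    // GET api/customers/5 -> 200 with the resource, or 404 if it does not exist.
    public IHttpActionResult Get(int id)
    {
        Customer customer;
        return Store.TryGetValue(id, out customer) ? (IHttpActionResult)Ok(customer) : NotFound();
    }

    // POST api/customers -> 201 Created plus a Location header for the new resource (not idempotent).
    public IHttpActionResult Post(Customer customer)
    {
        if (customer == null) return BadRequest("A customer body is required"); // 400: the client can fix this
        customer.Id = Store.Count + 1;
        Store[customer.Id] = customer;
        return Created("api/customers/" + customer.Id, customer);
    }

    // PUT api/customers/5 -> idempotent update; repeating it leaves the service in the same state.
    public IHttpActionResult Put(int id, Customer customer)
    {
        customer.Id = id;
        Store[id] = customer;
        return Ok(customer);
    }

    // DELETE api/customers/5 -> 204 No Content.
    public IHttpActionResult Delete(int id)
    {
        Store.Remove(id);
        return StatusCode(HttpStatusCode.NoContent);
    }
}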
There's something missing here that needs to be said.
WCF REST may not be able to provide all the functionality of a full REST approach, but it can facilitate REST-style access to existing WCF services. So if you decide to provide some sort of REST support on top of the current SOAP/named pipe protocol, it's the way to go if the ROI is low.
Hand rolling a full-blown REST API may be ideal, but it is not always economical. In 90% of my projects, the REST API is an afterthought. WCF comes in quite handy in that regard.
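As a rough illustration of that "REST on top of existing WCF" idea: if the existing contract (or a thin wrapper around it) carries the web attributes, an extra webHttpBinding endpoint can sit alongside the SOAP one. This is only a sketch; ReportService and IReportService stand in for your existing service implementation and contract.
// Sketch: hosting an existing WCF contract over webHttpBinding next to its SOAP endpoint.
// ReportService/IReportService are placeholders for the existing implementation and contract.
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(ReportService), new Uri("http://localhost:8080/reports"));

        // The existing SOAP endpoint stays as it is.
        host.AddServiceEndpoint(typeof(IReportService), new BasicHttpBinding(), "soap");

        // Additional REST-ish endpoint; WebHttpBehavior maps [WebGet]/[WebInvoke] onto plain HTTP.
        var rest = host.AddServiceEndpoint(typeof(IReportService), new WebHttpBinding(), "rest");
        rest.EndpointBehaviors.Add(new WebHttpBehavior());

        host.Open();
        Console.WriteLine("Listening... press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}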

How to Test an Undocumented Web Service?

I came across this question recently; could anyone please help me with what my approach should be as a tester?
Suppose there is a web service whose functionality has been changed and there is no documentation available for it. What would your approach be to testing it?
Update: Does the same answer hold if database functionality changed and there is no documentation?
It seems you might be asking one of two different questions:
1) How to discover the API of a black-box web service.
In this case, the best source would be the source code of the web service (given the absence of documentation); alternatively, look at existing clients, or the ?wsdl of the service.
2) How to discover what are correct and incorrect responses from the web service.
For this you need either requirements, or documentation, or correct clients. Probably the most likely to exist in this case is a client. Alternatively the web-service might be implementing some function the results of which can be confirmed externally.
You can't test something with no documentation. How would you know what results to expect?
Maybe you're looking for "documentation" in the wrong place. Somebody made these changes. They had some information telling them what changes to make to the database and to the service. There may even be a requirements document, and perhaps some design documents as well.
Get those, and use them to figure out what changed. Use that information to decide how to change your tests.
If you are using the service in a useful way, then presumably you have some calls which return some known results, even though this may not be documented. If this is the case then I would write tests which validate my expectations of the service as it currently behaves. Then at least if changes are made you'll have a better chance of knowing which of the bits that affect you have changed.
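For instance, a characterization-test sketch (assuming a client proxy generated from the service's WSDL and xUnit as the test framework; ServiceClient, GetCustomer and the expected values are placeholders): pin down what the service returns today, so that later changes which break your expectations show up immediately.
// Characterization-test sketch: record what the service returns *today* and assert on it.
// ServiceClient/GetCustomer and the expected values are hypothetical placeholders for your
// generated proxy and a known-good call your application really makes.
using Xunit;

public class ServiceCharacterizationTests
{
    [Fact]
    public void GetCustomer_returns_the_values_we_currently_rely_on()
    {
        var client = new ServiceClient();              // hypothetical generated proxy

        var customer = client.GetCustomer("ACME-001"); // a call our application really makes

        // Assert only the parts of the response our code depends on.
        Assert.Equal("ACME-001", customer.Id);
        Assert.Equal("Acme Ltd", customer.Name);
    }
}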
Generally speaking, a web service provides a consistent contract between the providing service and callers. It specifies that whilst the back-end implementation might change, the interface for the service will remain consistent.
If you are interested in discovering what functions are available for the service, it may well provide metadata that documents its available functions and message types. Usually, this is accessible by appending "?wsdl" to the web service URL, although other schemes exist.
Once you have a good idea of the available functions, you can begin to invoke them through your testing framework and evaluating the responses in accordance with your usual test processes.
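A quick way to get that list of operations (a sketch; the service URL below is a placeholder) is to pull the ?wsdl document and read the operation names out of its portType elements:
// Sketch: list the operations advertised by a service's WSDL (the URL below is a placeholder).
using System;
using System.Linq;
using System.Net.Http;
using System.Xml.Linq;

class WsdlExplorer
{
    static void Main()
    {
        XNamespace wsdl = "http://schemas.xmlsoap.org/wsdl/";
        using (var http = new HttpClient())
        {
            string xml = http.GetStringAsync("http://example.com/MyService.svc?wsdl").Result;
            var doc = XDocument.Parse(xml);

            // portType/operation elements carry the abstract operation names.
            var operations = doc.Descendants(wsdl + "portType")
                                .Descendants(wsdl + "operation")
                                .Select(o => (string)o.Attribute("name"))
                                .Distinct();

            foreach (var name in operations)
                Console.WriteLine(name);
        }
    }
}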