Jersey Test - HTTP DELETE method with JSON request body

I am using Jersey Test to test a REST service DELETE method:
@DELETE
@Path("/myPath")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public MyResponse myMethod(MyRequest myRequest) {
    // ...
}
I have tried the example below and other methods:
Entity<MyRequest> requestEntity = Entity.entity(new MyRequest(arg1, arg2), MediaType.APPLICATION_JSON);
target(MY_URI).request(MediaType.APPLICATION_JSON).method("DELETE", requestEntity)
and
target(MY_URI).request(MediaType.APPLICATION_JSON).build("DELETE", requestEntity).invoke();
But it does not work.
How do I make an HTTP DELETE request with a body in Jersey Test?

According to the HTTP specification:
If a DELETE request includes an entity body, the body is ignored
Though a lot of servers still support the entity body, I guess this is why Jersey considers the body as breaking HTTP compliance. Jersey validates client requests for compliance. To get around this validation, you can set the client property
ClientProperties.SUPPRESS_HTTP_COMPLIANCE_VALIDATION
If true, the strict validation of HTTP specification compliance will be suppressed.
By default, Jersey client runtime performs certain HTTP compliance checks (such as which HTTP methods can facilitate non-empty request entities etc.) in order to fail fast with an exception when user tries to establish a communication non-compliant with HTTP specification. Users who need to override these compliance checks and avoid the exceptions being thrown by Jersey client runtime for some reason, can set this property to true. As a result, the compliance issues will be merely reported in a log and no exceptions will be thrown.
Note that the property suppresses the Jersey layer exceptions. Chances are that the non-compliant behavior will cause different set of exceptions being raised in the underlying I/O connector layer.
This property can be configured in a client runtime configuration or directly on an individual request. In case of conflict, request-specific property value takes precedence over value configured in the runtime configuration.
The default value is false.
To configure it in JerseyTest, you can do
@Override
public void configureClient(ClientConfig config) {
    config.property(ClientProperties.SUPPRESS_HTTP_COMPLIANCE_VALIDATION, true);
}
Assuming you are making your requests by calling the target(..) method of the JerseyTest, the above configuration will apply to all requests. If you only want to skip the validation for certain requests, you can instead set the property on the WebTarget and omit the configuration above.
target(...).property(...).request()...
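For example, a complete per-request DELETE might look like this (a sketch reusing MY_URI and MyRequest from the question):

Entity<MyRequest> requestEntity = Entity.entity(new MyRequest(arg1, arg2), MediaType.APPLICATION_JSON);

MyResponse response = target(MY_URI)
        .property(ClientProperties.SUPPRESS_HTTP_COMPLIANCE_VALIDATION, true)
        .request(MediaType.APPLICATION_JSON)
        .method("DELETE", requestEntity, MyResponse.class);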
EDIT
Another thing I should mention is that Grizzly is one of the servers that doesn't support the entity unless configured to. I'm not quite sure though how to configure that in JerseyTest. So if you are using the Grizzly test provider, it may not even work on the server side.
If this is the case, you can try the in-memory test provider, or use the Jetty provider, as shown below.
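Switching to the in-memory provider is a one-method override (a sketch; it assumes the jersey-test-framework-provider-inmemory artifact is on the test classpath):

@Override
protected TestContainerFactory getTestContainerFactory() {
    // The in-memory container bypasses the network layer entirely,
    // so Grizzly's restriction on DELETE payloads never comes into play
    return new InMemoryTestContainerFactory();
}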
EDIT Grizzly Configuration
Edit provided by Emanuele Lombardi
You can configure the Grizzly test provider by using the following snippet:
@Override
protected TestContainerFactory getTestContainerFactory() throws TestContainerException {
    return new TestContainerFactory() {
        private final GrizzlyTestContainerFactory grizzlyTestContainerFactory = new GrizzlyTestContainerFactory();

        @Override
        public TestContainer create(URI baseUri, DeploymentContext deploymentContext) {
            TestContainer testContainer = grizzlyTestContainerFactory.create(baseUri, deploymentContext);
            try {
                HttpServer server = (HttpServer) FieldUtils.readDeclaredField(testContainer, "server", true);
                server.getServerConfiguration().setAllowPayloadForUndefinedHttpMethods(true);
            } catch (IllegalAccessException e) {
                fail(e.getMessage());
            }
            return testContainer;
        }
    };
}
The following method must be invoked before the Grizzly server starts:
server.getServerConfiguration().setAllowPayloadForUndefinedHttpMethods(true);
The server instance is retrieved using reflection (in this example through org.apache.commons.lang3.reflect.FieldUtils#readDeclaredField).
This code works as long as the server field name inside GrizzlyTestContainerFactory#GrizzlyTestContainer is not changed, but it seems a reasonable approach to me, at least in a unit test.

Related

Is it possible to have multiple handlers for the same exception leveraging RESTEasy's ExceptionMapper?

When handling RESTEasy exceptions, it is typically very straightforward to perform custom exception handling (in this case, the intent is to handle marshalling issues when receiving an HTTP request):
@Provider
class MissingKotlinParameterExceptionHandler : ExceptionMapper<MissingKotlinParameterException> {
    override fun toResponse(exception: MissingKotlinParameterException?): Response {
        println("my MissingKotlinParameterException mapper")
        return Response.serverError().build()
    }
}
The particular challenge I'm experiencing, however, is when the same exception is thrown from different endpoints. For example, /service1/foo and /service2/bar must, per architect specifications, return completely separate error payloads. Is it possible to separate the implementations based on some sort of configuration, or package structure?
You can inject the resource info into the ExceptionMapper class using:
@Context ResourceInfo info; // this is the Java version
Then, in toResponse, use that field to determine the resource method that serviced the request.
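A minimal Java sketch of that approach (Service1Resource and Service1Error are hypothetical placeholders for your own types):

@Provider
public class MissingKotlinParameterExceptionHandler implements ExceptionMapper<MissingKotlinParameterException> {

    @Context
    private ResourceInfo info; // injected per request via a thread-safe proxy

    @Override
    public Response toResponse(MissingKotlinParameterException exception) {
        // Choose the error payload based on which resource class serviced the request
        if (Service1Resource.class.equals(info.getResourceClass())) {
            return Response.status(Response.Status.BAD_REQUEST)
                    .entity(new Service1Error())
                    .build();
        }
        return Response.serverError().build();
    }
}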

ServiceStack: Reinstate pipeline when invoking a Service manually?

As a follow-up to this question, I wanted to understand how my invoking of a Service manually can be improved. This became longer than I wanted, but I feel the background info is needed.
When doing a pub/sub (broadcast), the normal sequence and flow in the Messaging API isn't used, and I instead get a callback when a pub/sub message is received, using IRedisClient, IRedisSubscription:
_subscription.OnMessage = (channel, msg) =>
{
    onMessageReceived(ParseJsonMsgToPoco(msg));
};
The Action onMessageReceived will then, in turn, invoke a normal .NET/C# Event, like so:
protected override void OnMessageReceived(MyRequest request)
{
    OnMyEvent?.Invoke(this, new RequestEventArgs(request));
}
This works and I get my request, but I would like it to be streamlined into the other flow, the one used by the Messaging API: the request finds its way into a Service class implementation, and all the normal boilerplate and dependency injection take place as they would when using the Messaging API.
So, in my Event handler, I manually invoke the Service:
private void Instance_OnMyEvent(object sender, RequestEventArgs e)
{
    using (var myRequestService = HostContext.ResolveService<MyRequestService>(new BasicRequest()))
    {
        myRequestService.Any(e.Request);
    }
}
and the MyRequestService is indeed found and Any called, and dependency injection works for the Service.
Question 1:
Methods such as OnBeforeExecute, OnAfterExecute, etc., are not called unless I invoke them manually, like myRequestService.OnBeforeExecute(e), etc. What parts of the pipeline are lost? Can they be reinstated in some easy way, so I don't have to call each of them manually, in order?
Question 2:
I think I am messing up the DI system when I do this:
using (var myRequestService = HostContext.ResolveService<MyRequestService>(new BasicRequest()))
{
    myRequestService.OnBeforeExecute(e.Request);
    myRequestService.Any(e.Request);
    myRequestService.OnAfterExecute(e.Request);
}
The effect I see is that the injected dependencies I have registered with container.AddScoped aren't scoped, but seem to behave like singletons. I can see this because I have a Guid inside the injected class, and that Guid is always the same in this case, when it should be different for each request.
container.AddScoped<IRedisCache, RedisCache>();
and the OnBeforeExecute (in a descendant to Service) is like:
public override void OnBeforeExecute(object requestDto)
{
    base.OnBeforeExecute(requestDto);
    IRedisCache cache = TryResolve<IRedisCache>();
    cache?.SetGuid(Guid.NewGuid());
}
So, the IRedisCache Guid should be different each time, but it isn't. This however works fine when I use the Messaging API "from start to finish". It seems that if I call the TryResolve in the AppHostBase descendant, the AddScoped is ignored, and an instance is placed in the container, and then never removed.
What parts of the pipeline are lost?
None of the request pipeline is executed:
myRequestService.Any(e.Request);
This physically only invokes the Any C# method of your MyRequestService class; it doesn't (and cannot) do anything else.
The recommended way for invoking other Services during a Service Request is to use the Service Gateway.
But if you want to invoke a Service outside of a HTTP Request you can use the RPC Gateway for executing non-trusted services as it invokes the full Request Pipeline & converts HTTP Error responses into Typed Error Responses:
HostContext.AppHost.RpcGateway.ExecuteAsync()
For executing internal/trusted Services outside of a Service Request you can use HostContext.AppHost.ExecuteMessage as used by ServiceStack MQ, which applies Message Request/Response Filters, Service Action Filters & Events.
I have registered with container.AddScoped
Do not use Request Scoped dependencies outside of a HTTP Request; use Singleton if the dependencies are thread-safe, otherwise register them as Transient. If you need per-request storage, pass it in IRequest.Items.

Rebus with RabbitMQ accept requests from Python

I am setting up a .NET Core service that reads from RabbitMQ using Rebus. It seems that a request placed in RabbitMQ needs to have the .NET object namespace information. Is there a way to work around this? For example, if I had a service written in Python placing items on the queue, would it be possible to read and process those requests? Every time I test and try to send something other than the .NET object, I get an exception.
System.Collections.Generic.KeyNotFoundException: Could not find the key 'rbs2-content-type' - have the following keys only: 'rbs2-msg-id'
It depends on which serializer you're using on the receiving end.
By default, Rebus will use its built-in JSON serializer with a fairly "helpful" setting, meaning that all .NET type names are included. This enables serialization of complex objects, including abstract/interface references, etc.
This serializer requires a few special headers to be present, though, e.g. the rbs2-content-type header, which it uses to verify that the incoming message presents itself as JSON (most likely by having application/json; charset=utf-8 as its content type).
If you want to enable deserialization of messages from other platforms, I suggest you provide the necessary headers on the messages (which – at least with Rebus' built-in serializer – also includes a .NET type name of the type to try to deserialize into).
Another option is to install a custom serializer, which is a fairly easy thing to do – you can get started by registering your serializer like this:
Configure.With(...)
    .(...)
    .Serialization(s => s.Register(c => new YourCrazySerializer()))
    .Start();
which you then implement somewhat like this:
public class YourCrazySerializer : ISerializer
{
    public async Task<TransportMessage> Serialize(Message message)
    {
        var headers = message.Headers.Clone();

        // turn the body into a byte[] here, producing e.g. "bytes"
        //
        // possibly add headers

        return new TransportMessage(headers, bytes);
    }

    public async Task<Message> Deserialize(TransportMessage transportMessage)
    {
        var headers = transportMessage.Headers.Clone();

        // turn the byte[] into an object here, producing e.g. "body"
        //
        // possibly remove headers

        return new Message(headers, body);
    }
}
As you can see, it's pretty easy to modify Rebus to accept messages from other systems.

Using javax.ws.rs.client.ClientBuilder in CXF to create Client, any route to be able to use local transport?

I work on a codebase that uses the standard "javax.ws.rs.client.ClientBuilder" class, from the CXF distribution, to configure and create a "javax.ws.rs.client.Client".
This works well enough.
I'm now trying to write tests that use JAXRSServerFactoryBean to manage a fake server using a controller defined by an inline class. I can set my host:port to localhost:something, both in the test and in the client configuration, and this works well enough to allow me to test our MessageBodyReaders and HTTP exception handling.
However, I think this won't be "scalable", as each fake server will have to run on a "dedicated" port (while running the test, at least). I can try to use uncommon ports, and have different tests use different ports, or use random numbers, but that's all somewhat risky. I don't really want CI builds to fail because tests running in parallel ended up using the same port.
I read about the ability in CXF (not JAX-RS) to use "local transport" (https://cwiki.apache.org/confluence/display/CXF20DOC/JAXRS+Testing). It appears that might resolve my problem. I need to verify this, but it's possible that two tests running in parallel both using local transport will not conflict.
However, I can't even get this to work yet, because our client code is using the "standard" JAX-RS client class, not the CXF one. They appear to be different and incompatible.
At the point where I create the client, I tried to do this (just to see if it can work):
WebClient.getConfig(client).getRequestContext().put(LocalConduit.DIRECT_DISPATCH, Boolean.TRUE);
Unfortunately, this fails with "Not a valid Client" in "org.apache.cxf.jaxrs.client.WebClient.getConfig(Object)" because it needs to be an instance of "org.apache.cxf.jaxrs.client.Client", not javax.ws.rs.client.Client.
Is there any easy (or even possible) path forward here?
You can use ClientRequestFilters to unit test JAX-RS clients. Basically, register a custom ClientRequestFilter on your Client object (or ClientBuilder) that mocks the response using the abortWith(Response) method on the ClientRequestContext passed into the filter method.
Something like this should work:
public class MyMockRequestFilter implements ClientRequestFilter {
    @Override
    public void filter(ClientRequestContext requestContext) {
        MyEntity entity = ...; // get the entity you want to mock as returned from the server
        requestContext.abortWith(Response.ok(entity).build());
    }
}
...
ClientBuilder builder = ClientBuilder.newBuilder().register(MyMockRequestFilter.class);
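Any request made through a client built from that builder is then short-circuited by the filter and never hits the network; for example (the target URI is arbitrary, since the filter aborts the request before it is sent):

Client client = builder.build();

// Returns the mocked entity from the filter, no server required
MyEntity result = client.target("http://ignored/api")
        .request()
        .get(MyEntity.class);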
Hope this helps,
Andy

spring-security: authorization without authentication

I'm trying to integrate Spring Security in my web application. It seems pretty easy to do as long as you integrate the whole process of authentication and authorization.
However, authentication and authorization seem so coupled that it's proving very time-consuming for me to understand how I could split these processes and get authorization independently of authentication.
The authentication process is external to our system (based on single sign-on) and cannot be modified. Nevertheless, once the user succeeds in this process, the user, including roles, is loaded into the session.
What we are trying to achieve is to make use of this information in the authorization process of Spring Security, that is to say, to force it to get the roles from the user session instead of picking them up through the authentication provider.
Is there any way to achieve this?
If your authentication is already done by an SSO service, then you should use one of Spring Security's pre-authentication filters. You can then specify a UserDetails service (possibly custom) that uses the pre-authenticated user principal to populate the GrantedAuthority instances.
Spring Security includes several pre-authentication filters, including J2eePreAuthenticatedProcessingFilter and RequestHeaderPreAuthenticatedProcessingFilter. If you can't find one that works for you, it's also possible, and not that hard, to write your own, provided you know where in the request your SSO implementation stuffs the data. (That depends on the implementation, of course.)
Just implement the Filter interface and do something like this in the doFilter method:
public void doFilter(ServletRequest request, ServletResponse response,
        FilterChain chain) throws IOException, ServletException {
    // principal is set in here as a header or parameter. you need to find out
    // what it's named to extract it
    HttpServletRequest req = (HttpServletRequest) request;
    if (SecurityContextHolder.getContext().getAuthentication() == null) {
        // in here, get your principal, and populate the auth object with
        // the right authorities
        Authentication auth = doAuthentication(req);
        SecurityContextHolder.getContext().setAuthentication(auth);
    }
    chain.doFilter(request, response);
}
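doAuthentication is where you adapt to your specific SSO implementation. A minimal sketch, assuming the principal and a comma-separated role list arrive as request headers (the header names here are made up):

private Authentication doAuthentication(HttpServletRequest req) {
    // Hypothetical header names; use whatever your SSO system actually sets
    String principal = req.getHeader("SSO_USER");
    List<GrantedAuthority> authorities = new ArrayList<>();
    for (String role : req.getHeader("SSO_ROLES").split(",")) {
        authorities.add(new SimpleGrantedAuthority(role.trim()));
    }
    // The three-argument constructor marks the token as authenticated
    return new PreAuthenticatedAuthenticationToken(principal, "N/A", authorities);
}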
Yes, it's possible. Spring Security (like most of the rest of Spring) is interface-driven so that you can plug in your own implementations selectively for different parts of the framework.
Update: Spring's authorisation and authentication mechanisms work together - the authentication mechanism will authenticate the user and insert various GrantedAuthority instances in the security context. These will then be checked by the authorisation machinery to allow/disallow certain operations.
Use nont's answer for the details on how to use pre-existing authentication. How you get the details (e.g. roles) from your session will of course depend on your specific setup. But if you put in GrantedAuthority instances derived from the roles pre-populated in your session by your SSO system, you will be able to use them in your authorisation logic.
From the reference documentation (slightly edited, with my emphasis):
You can (and many users do) write their own filters or MVC controllers to provide interoperability with authentication systems that are not based on Spring Security. For example, you might be using Container Managed Authentication which makes the current user available from a ThreadLocal or JNDI location. Or you might work for a company that has a legacy proprietary authentication system, which is a corporate "standard" over which you have little control. In such situations it's quite easy to get Spring Security to work, and still provide authorization capabilities. All you need to do is write a filter (or equivalent) that reads the third-party user information from a location, build a Spring Security-specific Authentication object, and put it onto the SecurityContextHolder. It's quite easy to do this, and it is a fully-supported integration approach.
The server that handles the authentication should redirect the user to the application, passing it some kind of key (a token in CAS SSO). The application then uses the key to ask the authentication server for the username and roles associated with it. With this info, it creates a security context that is passed to the authorization manager. This is a very simplified version of an SSO login workflow.
Take a look at CAS SSO and the CAS 2 Architecture.
Tell me if you need more information.
We have had the same requirement, where we had to use Spring Security for authorization purposes only. We were using SiteMinder for authentication. You can find more details on how to use the authorization part of Spring Security without its authentication at http://codersatwork.wordpress.com/2010/02/13/use-spring-security-for-authorization-only-not-for-authentication/
I have also added source code and test cases at http://code.google.com/p/spring-security-with-authorization-only/source/browse/
I am trying to understand CAS authentication with our own authorization, and was getting confused since the User object in Spring Security always expects the password to be filled in, and we don't care about that in our scenario. After reading Surabh's post, it seems the trick is to return a custom User object without the password filled in. I will try that out and see if it works in my case. Hopefully no other code in the chain will expect the password in the User object.
I use authorization like this:
Inject the authorization-related beans into my own bean:
@Autowired
private AccessDecisionManager accessDecisionManager;

@Autowired
private FilterSecurityInterceptor filterSecurityInterceptor;
Use these beans like this:
FilterInvocation fi = new FilterInvocation(rundata.getRequest(), rundata.getResponse(), new FilterChain() {
    public void doFilter(ServletRequest arg0, ServletResponse arg1) throws IOException, ServletException {
        // TODO Auto-generated method stub
    }
});
FilterInvocationDefinitionSource objectDefinitionSource = filterSecurityInterceptor.getObjectDefinitionSource();
ConfigAttributeDefinition attr = objectDefinitionSource.getAttributes(fi);
Authentication authenticated = new Authentication() {
    // ...
    public GrantedAuthority[] getAuthorities() {
        GrantedAuthority[] result = new GrantedAuthority[1];
        result[0] = new GrantedAuthorityImpl("ROLE_USER");
        return result;
    }
};
accessDecisionManager.decide(authenticated, fi, attr);
I too spent a lot of hours investigating how to implement custom authorization without authentication.
The authentication process is external to our system (based on single sign-on).
I have done it as mentioned below, and it works! (I am sure there are many other ways to do it better, but this way suits my scenario well enough.)
Scenario: the user is already authenticated by an external system, and all information needed for authorization is present in the request.
1.) A security config needs to be created, enabling global method security as below.
@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity(securedEnabled = true, prePostEnabled = true)
class SpringWebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(final HttpSecurity http) throws Exception {
    }
}
2.) Implement Spring's PermissionEvaluator to decide whether the request should be allowed or rejected:
@Component
public class CustomPermissionEvaluator implements PermissionEvaluator {

    public boolean authorize(final String groups, final String role) {
        boolean allowed = false;
        System.out.println("Authorizing: " + groups + "...");
        if (groups.contains(role)) {
            allowed = true;
            System.out.println(" authorized!");
        }
        return allowed;
    }

    @Override
    public boolean hasPermission(final Authentication authentication, final Object groups, final Object role) {
        return authorize((String) groups, (String) role);
    }

    @Override
    public boolean hasPermission(final Authentication authentication, final Serializable targetId, final String targetType, final Object permission) {
        return authorize((String) targetId, (String) permission);
    }
}
3.) Add MethodSecurityConfig
@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class MethodSecurityConfig extends GlobalMethodSecurityConfiguration {

    @Override
    protected MethodSecurityExpressionHandler createExpressionHandler() {
        DefaultMethodSecurityExpressionHandler expressionHandler = new DefaultMethodSecurityExpressionHandler();
        expressionHandler.setPermissionEvaluator(new CustomPermissionEvaluator());
        return expressionHandler;
    }
}
4.) Add @PreAuthorize to your controller as shown below. In this example, all the groups of the user are present in a request header with the key 'availableUserGroups'. This is then passed on to the CustomPermissionEvaluator to verify authorization. Note that Spring automatically passes the Authentication object to the 'hasPermission' method, so if you want to load the user and check with Spring's 'hasRole' method, that object can be used.
@PreAuthorize("hasPermission(#userGroups, 'ADMIN')")
@RequestMapping(value = "/getSomething")
public String getSomething(@RequestHeader(name = "availableUserGroups") final String userGroups) {
    return "resource allowed to access";
}
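If you go the hasRole route instead (assuming the authorities were populated during pre-authentication), a hypothetical variant would look like:

@PreAuthorize("hasRole('ADMIN')")
@RequestMapping(value = "/getSomethingElse")
public String getSomethingElse() {
    return "resource allowed to access";
}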
Handling other scenarios:
1.) In a scenario where you want to load the user before performing the authorization, you can use Spring's pre-authentication filters and do it in a similar way.
Example link: http://www.learningthegoodstuff.com/2014/12/spring-security-pre-authentication-and.html