How to implement a Restlet JAX-RS handler which is a thin proxy to a RESTful API, possibly implemented in the same Java process?

We have two RESTful APIs - one is internal and another one is public, the two being implemented by different jars. The public API sort of wraps the internal one, performing the following steps:
Do some work
Call internal API
Do some work
Return the response to the user
It may happen (though not necessarily) that the two jars run in the same Java process.
We are using Restlet with the JAX-RS extension.
Here is an example of a simple public API implementation, which just forwards to the internal API:
@PUT
@Path("abc")
public MyResult method1(@Context UriInfo uriInfo, InputStream body) throws Exception {
    String url = uriInfo.getAbsolutePath().toString().replace("/api/", "/internalapi/");
    RestletClientResponse<MyResult> reply = WebClient.put(url, body, MyResult.class);
    RestletUtils.addResponseHeaders(reply.responseHeaders);
    return reply.returnObject;
}
Where WebClient.put is:
public class WebClient {
    public static <T> RestletClientResponse<T> put(String url, Object body, Class<T> returnType) throws Exception {
        Response restletResponse = Response.getCurrent();
        ClientResource resource = new ClientResource(url);
        Representation reply = null;
        try {
            Client timeoutClient = new Client(Protocol.HTTP);
            timeoutClient.setConnectTimeout(30000);
            resource.setNext(timeoutClient);
            reply = resource.put(body, MediaType.APPLICATION_JSON);
            T result = new JacksonConverter().toObject(new JacksonRepresentation<T>(reply, returnType), returnType, resource);
            Status status = resource.getStatus();
            return new RestletClientResponse<T>(result, (Form)resource.getResponseAttributes().get(HeaderConstants.ATTRIBUTE_HEADERS), status);
        } finally {
            if (reply != null) {
                reply.release();
            }
            resource.release();
            Response.setCurrent(restletResponse);
        }
    }
}
and RestletClientResponse<T> is:
public class RestletClientResponse<T> {
    public T returnObject = null;
    public Form responseHeaders = null;
    public Status status = null;

    public RestletClientResponse(T returnObject, Form responseHeaders, Status status) {
        this.returnObject = returnObject;
        this.responseHeaders = responseHeaders;
        this.status = status;
    }
}
and RestletUtils.addResponseHeaders is:
public class RestletUtils {
    public static void addResponseHeader(String key, Object value) {
        Form responseHeaders = (Form)org.restlet.Response.getCurrent().getAttributes().get(HeaderConstants.ATTRIBUTE_HEADERS);
        if (responseHeaders == null) {
            responseHeaders = new Form();
            org.restlet.Response.getCurrent().getAttributes().put(HeaderConstants.ATTRIBUTE_HEADERS, responseHeaders);
        }
        responseHeaders.add(key, value.toString());
    }

    public static void addResponseHeaders(Form responseHeaders) {
        for (String headerKey : responseHeaders.getNames()) {
            RestletUtils.addResponseHeader(headerKey, responseHeaders.getValues(headerKey));
        }
    }
}
The problem is that if the two jars run in the same Java process, an exception thrown from the internal API is not routed to the internal API's JAX-RS exception mapper - the exception propagates up to the public API and is translated to an Internal Server Error (500).
Which means I am doing it wrong. So, my question is how do I invoke the internal RESTful API from within the public API implementation given the constraint that both the client and the server may run in the same Java process.
Surely, there are other problems, but I have a feeling that fixing the one I have just described is going to fix others as well.
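For reference, by "exception mapper" I mean the standard JAX-RS ExceptionMapper mechanism; a minimal sketch (InternalApiException is a hypothetical exception type, not part of the code above):

import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

// Hypothetical: InternalApiException stands in for whatever the internal API throws.
@Provider
public class InternalApiExceptionMapper implements ExceptionMapper<InternalApiException> {
    @Override
    public Response toResponse(InternalApiException e) {
        // Translate the failure into a meaningful status instead of a generic 500.
        return Response.status(Response.Status.BAD_REQUEST)
                .entity(e.getMessage())
                .build();
    }
}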

The problem has nothing to do with the fact that both the internal and public JARs are in the same JVM. They are perfectly separated by the WebClient.put() method, which performs a separate HTTP call. So, an exception in the internal API doesn't propagate to the public API.
The internal server error in the public API is caused by the post-processing mechanism, which interprets the output of the internal API and crashes for some reason. Don't blame the internal API; it is perfectly isolated and can't cause any trouble (even though it's in the same JVM).


Webflux security authorisation test with bearer token (JWT) and custom claim

I have a Spring Boot (2.3.6.RELEASE) service that is acting as a resource server; it has been implemented using WebFlux, and client JWTs are provided by a third-party identity server.
I am attempting to test the security of the endpoints using JUnit 5 and @SpringBootTest. (For the record, security appears to work as required during manual testing.)
I am mutating the WebTestClient to include a JWT with an appropriate claim (myClaim); however, in my custom ReactiveAuthorizationManager there is no bearer token in the request's headers, so with nothing to decode or no claim to validate, the request fails authorisation, as it should.
My test setup is thus:
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ActiveProfiles("test")
class ControllerTest {

    @Autowired
    private ApplicationContext applicationContext;

    private WebTestClient webTestClient;

    @BeforeEach
    void init() {
        webTestClient = WebTestClient
                .bindToApplicationContext(applicationContext)
                .apply(springSecurity())
                .configureClient()
                .build();
    }

    @Test
    void willAllowAccessForJwtWithValidClaim() {
        webTestClient.mutateWith(mockJwt().jwt(jwt -> jwt.claim("myClaim", "{myValue}")))
                .get()
                .uri("/securedEndpoint")
                .exchange()
                .expectStatus()
                .isOk();
    }
}
I have been attempting to follow this guide
I have tried the client with and without .filter(basicAuthentication()) just in case :)
It appears to me that the mockJwt() isn't being put into the request's Authorization header field.
I also think that the ReactiveJwtDecoder being injected into my ReactiveAuthorizationManager will attempt to decode the test JWT against the identity provider, which will fail.
I could mock the ReactiveAuthorizationManager or the ReactiveJwtDecoder.
Is there anything I am missing?
Perhaps there is a way to create "test" JWTs using the Identity Service's JWK set URI?
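For the decoder-mocking option, a rough sketch of what I had in mind (assuming Mockito via spring-boot-starter-test; the token value and claim are placeholders):

import org.junit.jupiter.api.BeforeEach;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.security.oauth2.jwt.Jwt;
import org.springframework.security.oauth2.jwt.ReactiveJwtDecoder;
import reactor.core.publisher.Mono;

import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.when;

@SpringBootTest
class JwtDecoderMockingSketch {

    // Replaces the real decoder bean, so the test token is never
    // verified against the third-party identity server.
    @MockBean
    private ReactiveJwtDecoder jwtDecoder;

    @BeforeEach
    void stubDecoder() {
        Jwt jwt = Jwt.withTokenValue("test-token")
                .header("alg", "none")
                .claim("myClaim", "myValue")
                .build();
        when(jwtDecoder.decode(anyString())).thenReturn(Mono.just(jwt));
    }
}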
Additional detail:
Details of the ReactiveAuthorizationManager and Security Config
public class MyReactiveAuthorizationManager implements ReactiveAuthorizationManager<AuthorizationContext> {

    private static final AuthorizationDecision UNAUTHORISED = new AuthorizationDecision(false);

    private final ReactiveJwtDecoder jwtDecoder;

    public MyReactiveAuthorizationManager(final ReactiveJwtDecoder jwtDecoder) {
        this.jwtDecoder = jwtDecoder;
    }

    @Override
    public Mono<AuthorizationDecision> check(final Mono<Authentication> authentication, final AuthorizationContext context) {
        final ServerWebExchange exchange = context.getExchange();
        if (null == exchange) {
            return Mono.just(UNAUTHORISED);
        }
        final List<String> authorisationHeaders = exchange.getRequest().getHeaders().getOrEmpty(HttpHeaders.AUTHORIZATION);
        if (authorisationHeaders.isEmpty()) {
            return Mono.just(UNAUTHORISED);
        }
        final String bearer = authorisationHeaders.get(0);
        return jwtDecoder.decode(bearer.replace("Bearer ", ""))
                .flatMap(jwt -> determineAuthorisation(jwt.getClaimAsStringList("myClaim")));
    }

    private Mono<AuthorizationDecision> determineAuthorisation(final List<String> claimValues) {
        if (Objects.isNull(claimValues)) {
            return Mono.just(UNAUTHORISED);
        } else {
            return Mono.just(new AuthorizationDecision(!Collections.disjoint(claimValues, List.of("myValues"))));
        }
    }
}
@EnableWebFluxSecurity
public class JwtSecurityConfig {

    @Bean
    public SecurityWebFilterChain configure(final ServerHttpSecurity http,
                                            final ReactiveAuthorizationManager reactiveAuthorizationManager) {
        http
            .csrf().disable()
            .logout().disable()
            .authorizeExchange().pathMatchers("/securedEndpoint").access(reactiveAuthorizationManager)
            .anyExchange().permitAll()
            .and()
            .oauth2ResourceServer()
            .jwt();
        return http.build();
    }
}
Loosely speaking, it turns out that what I am actually doing is using a custom claim as an "authority"; that is, saying "myClaim" must contain a value of "x" to allow access to a given path.
This is a little different to the claim being a simple custom claim, i.e. an additional bit of data (a user's preferred colour scheme, perhaps) in the token.
With that in mind I realised that the behaviour I was observing under testing was probably correct, so instead of implementing a ReactiveAuthorizationManager I chose to configure a ReactiveJwtAuthenticationConverter:
@Bean
public ReactiveJwtAuthenticationConverter jwtAuthenticationConverter() {
    final JwtGrantedAuthoritiesConverter converter = new JwtGrantedAuthoritiesConverter();
    converter.setAuthorityPrefix(""); // 1
    converter.setAuthoritiesClaimName("myClaim");

    final Converter<Jwt, Flux<GrantedAuthority>> rxConverter = new ReactiveJwtGrantedAuthoritiesConverterAdapter(converter);

    final ReactiveJwtAuthenticationConverter jwtAuthenticationConverter = new ReactiveJwtAuthenticationConverter();
    jwtAuthenticationConverter.setJwtGrantedAuthoritiesConverter(rxConverter);
    return jwtAuthenticationConverter;
}
(Comment 1: the JwtGrantedAuthoritiesConverter prepends "SCOPE_" to the claim value by default; this can be controlled using setAuthorityPrefix.)
This required a tweak to the SecurityWebFilterChain configuration:
http
    .csrf().disable()
    .logout().disable()
    .authorizeExchange().pathMatchers("/securedEndpoint").hasAnyAuthority("myValue")
    .anyExchange().permitAll()
    .and()
    .oauth2ResourceServer()
    .jwt(jwt -> jwt.jwtAuthenticationConverter(jwtAuthenticationConverter));
Tests
@SpringBootTest
class ControllerTest {

    private WebTestClient webTestClient;

    @Autowired
    public void setUp(final ApplicationContext applicationContext) {
        webTestClient = WebTestClient
                .bindToApplicationContext(applicationContext) // 2
                .apply(springSecurity()) // 3
                .configureClient()
                .build();
    }

    @Test
    void myTest() {
        webTestClient
                .mutateWith(mockJwt().authorities(new SimpleGrantedAuthority("myValue"))) // 4
                .get()
                .uri("/securedEndpoint")
                .exchange()
                .expectStatus()
                .isOk();
    }
}
To make the tests work, it appears that the WebTestClient needs to bind to the application context (at comment 2).
Ideally I would have preferred to have the WebTestClient bind to the server; however, apply(springSecurity()) (at comment 3) doesn't return an appropriate type for apply when using bindToServer.
There are a number of different ways to "mock" the JWT when testing; the one used here is at comment 4. For alternatives, see the Spring docs.
I hope this helps somebody else in the future; security and OAuth2 can be confusing :)
Thanks go to @Toerktumlare for pointing me in the direction of useful documentation.

Spring WebFlux (Flux): how to publish dynamically

I am new to reactive programming and Spring WebFlux. I want to make my App 1 publish Server-Sent Events through a Flux and have my App 2 listen on it continuously.
I want the Flux to publish on demand (e.g. when something happens). All the examples I have found use Flux.interval to publish events periodically, and there seems to be no way to append to or modify the content of a Flux once it is created.
How can I achieve my goal? Or am I totally wrong conceptually?
Publish "dynamically" using FluxProcessor and FluxSink
One of the techniques to supply data manually to a Flux is using the FluxProcessor#sink method, as in the following example:
@SpringBootApplication
@RestController
public class DemoApplication {

    final FluxProcessor processor;
    final FluxSink sink;
    final AtomicLong counter;

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    public DemoApplication() {
        this.processor = DirectProcessor.create().serialize();
        this.sink = processor.sink();
        this.counter = new AtomicLong();
    }

    @GetMapping("/send")
    public void test() {
        sink.next("Hello World #" + counter.getAndIncrement());
    }

    @RequestMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<ServerSentEvent> sse() {
        return processor.map(e -> ServerSentEvent.builder(e).build());
    }
}
Here, I created a DirectProcessor in order to support multiple subscribers that listen to the data stream. I also applied FluxProcessor#serialize, which provides safe support for multiple producers (invocation from different threads without violating the Reactive Streams spec rules, especially rule 1.3). Finally, by calling "http://localhost:8080/send" we will see the message Hello World #1 (of course, only if you have connected to "http://localhost:8080" previously).
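To try that flow out, one way to connect is with Spring's reactive WebClient (a sketch, assuming spring-webflux is on the classpath; the URL matches the example above):

import org.springframework.web.reactive.function.client.WebClient;

public class SseClientSketch {
    public static void main(String[] args) throws InterruptedException {
        // Subscribe to the SSE stream first; then hit /send from another client.
        WebClient.create("http://localhost:8080")
                .get()
                .retrieve()
                .bodyToFlux(String.class) // each SSE data payload arrives as one element
                .subscribe(System.out::println);

        Thread.sleep(60_000); // keep the demo process alive while events arrive
    }
}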
Update For Reactor 3.4
With Reactor 3.4 you have a new API: reactor.core.publisher.Sinks. The Sinks API offers a fluent builder for manual data-sending, which lets you specify things like the number of elements in the stream, backpressure behaviour, the number of supported subscribers, and replay capabilities:
@SpringBootApplication
@RestController
public class DemoApplication {

    final Sinks.Many sink;
    final AtomicLong counter;

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    public DemoApplication() {
        this.sink = Sinks.many().multicast().onBackpressureBuffer();
        this.counter = new AtomicLong();
    }

    @GetMapping("/send")
    public void test() {
        EmitResult result = sink.tryEmitNext("Hello World #" + counter.getAndIncrement());
        if (result.isFailure()) {
            // do something here, since emission failed
        }
    }

    @RequestMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<ServerSentEvent> sse() {
        return sink.asFlux().map(e -> ServerSentEvent.builder(e).build());
    }
}
Note, message sending via the Sinks API introduces a new concept of emission and its result. The reason for such an API is the fact that Reactor extends Reactive Streams and has to follow backpressure control. That said, if you emit more signals than were requested, and the underlying implementation does not support buffering, your message will not be delivered. Therefore, the result of tryEmitNext is an EmitResult, which indicates whether the message was sent or not.
Also note that by default the Sinks API gives a serialized version of the Sink, which means you don't have to care about concurrency. However, if you know in advance that emission of messages is serial, you may build a Sinks.unsafe() version which does not serialize the given messages.
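For example, a minimal sketch (only safe under the assumption that all emissions happen from a single thread):

import reactor.core.publisher.Sinks;

public class UnsafeSinkSketch {
    public static void main(String[] args) {
        // Unserialized sink: skips the thread-safety guard that the default
        // Sinks.many() spec adds. Only valid if all emissions are single-threaded.
        Sinks.Many<String> sink = Sinks.unsafe()
                .many()
                .multicast()
                .onBackpressureBuffer();

        sink.asFlux().subscribe(System.out::println);
        sink.tryEmitNext("Hello from an unserialized sink");
    }
}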
Just another idea: using EmitterProcessor as a gateway to a Flux.
import reactor.core.publisher.EmitterProcessor;
import reactor.core.publisher.Flux;

public class MyEmitterProcessor {

    EmitterProcessor<String> emitterProcessor;

    public static void main(String args[]) {
        MyEmitterProcessor myEmitterProcessor = new MyEmitterProcessor();
        Flux<String> publisher = myEmitterProcessor.getPublisher();
        myEmitterProcessor.onNext("A");
        myEmitterProcessor.onNext("B");
        myEmitterProcessor.onNext("C");
        myEmitterProcessor.complete();
        publisher.subscribe(x -> System.out.println(x));
    }

    public Flux<String> getPublisher() {
        emitterProcessor = EmitterProcessor.create();
        return emitterProcessor.map(x -> "consume: " + x);
    }

    public void onNext(String nextString) {
        emitterProcessor.onNext(nextString);
    }

    public void complete() {
        emitterProcessor.onComplete();
    }
}
For more info, see the Reactor docs. There is a recommendation in the documentation itself that "Most of the time, you should try to avoid using a Processor. They are harder to use correctly and prone to some corner cases." But I don't know which kind of corner cases.

Testng - Skip dependent tests for only failed data sets

I am attempting to modify my dependent tests so they are run in a specific way, and I have yet to find a way to do this. For instance, say I have the following two tests and the defined data provider:
@DataProvider(name = "apiResponses")
public Object[][] queryApi() {
    return getApiResponses().entrySet().stream().map(response -> new Object[]{response.getKey(), response.getValue()}).toArray(Object[][]::new);
}

@Test(dataProvider = "apiResponses")
public void validateApiResponse(Object apiRequest, Object apiResponse) {
    if (apiResponse.statusCode != 200) {
        Assert.fail("Api Response must be that of a 200 to continue testing");
    }
}

@Test(dataProvider = "apiResponses", dependsOnMethods = "validateApiResponse")
public void validateResponseContent(Object apiRequest, Object apiResponse) {
    // The following method contains the necessary assertions for validating api response content
    validateApiResponseData(apiResponse);
}
Say I have 100 API requests I want to validate. With the above, if a single one of those 100 requests were to return a status code of anything other than 200, then validateResponseContent would be skipped for all 100. What I'm attempting to achieve is that the dependent test is skipped only for the API responses that returned without a status code of 200, and runs for all responses that returned WITH a status code of 200.
You should be using a TestNG Factory, which creates an instance holding both the apiRequest and the apiResponse for each data set. Each instance then first runs an assertion on the status code before it moves on to validating the actual API response.
Here's a sample that shows how this would look:
public class TestClassSample {

    private Object apiRequest, apiResponse;

    @Factory(dataProvider = "apiResponses")
    public TestClassSample(Object apiRequest, Object apiResponse) {
        this.apiRequest = apiRequest;
        this.apiResponse = apiResponse;
    }

    @Test
    public void validateApiResponse() {
        Assert.assertEquals(apiResponse.statusCode, 200, "Api Response must be that of a 200 to continue testing");
    }

    @Test(dependsOnMethods = "validateApiResponse")
    public void validateResponseContent() {
        // The following method contains the necessary assertions for validating api response content
        validateApiResponseData(apiResponse);
    }

    @DataProvider(name = "apiResponses")
    public static java.lang.Object[][] queryApi() {
        return getApiResponses().entrySet()
                .stream().map(
                        response -> new java.lang.Object[]{
                                response.getKey(), response.getValue()
                        })
                .toArray(Object[][]::new);
    }
}
Wouldn't adding an if/else block solve this?

@Test(dataProvider = "apiResponses")
public void validateApiResponse(Object apiRequest, Object apiResponse) {
    if (apiResponse.statusCode != 200) {
        Assert.fail("Api Response must be that of a 200 to continue testing");
    } else {
        validateApiResponseData(apiResponse);
    }
}

MassTransit with RabbitMq Request/Response wrong reply address because of network segments

I have a web app that uses a request/response message in MassTransit.
This works in our test environment, no problem.
However, on the customer deployment we face a problem. At the customer site we have two network segments, A and B. The component doing the database call is in segment A; the web app and the RabbitMQ server are in segment B.
Due to security restrictions, the component in segment A has to go through a load balancer with a given address. The component itself can connect to RabbitMQ with MassTransit. So far so good.
The web component in segment B, however, uses the direct address for the RabbitMQ server. When the web component now starts the request/response call, I can see that the message arrives at the component in segment A.
However, I see that the consumer tries to call the RabbitMQ server on the "wrong" address. It uses the address the web component used to issue the request, whereas the component in segment A should reply on the "loadbalancer" address.
Is there a way to configure or tell the RespondAsync call to use the connection address configured for that component?
Of course the easiest would be to have the web component also connect through the loadbalancer, but due to the network segments/security setup the loadbalancer is only reachable from segment A.
Any input/help is appreciated.
I had a similar problem with rabbitmq federation. Here's what I did.
ResponseAddressSendObserver
class ResponseAddressSendObserver : ISendObserver
{
    private readonly string _hostUriString;

    public ResponseAddressSendObserver(string hostUriString)
    {
        _hostUriString = hostUriString;
    }

    public Task PreSend<T>(SendContext<T> context)
        where T : class
    {
        if (context.ResponseAddress != null)
        {
            // Send relative response address alongside the message
            context.Headers.Set("RelativeResponseAddress",
                context.ResponseAddress.AbsoluteUri.Substring(_hostUriString.Length));
        }
        return Task.CompletedTask;
    }

    ...
}
ResponseAddressConsumeFilter
class ResponseAddressConsumeFilter : IFilter<ConsumeContext>
{
    private readonly string _hostUriString;

    public ResponseAddressConsumeFilter(string hostUriString)
    {
        _hostUriString = hostUriString;
    }

    public Task Send(ConsumeContext context, IPipe<ConsumeContext> next)
    {
        var responseAddressOverride = GetResponseAddress(_hostUriString, context);
        return next.Send(new ResponseAddressConsumeContext(responseAddressOverride, context));
    }

    public void Probe(ProbeContext context) { }

    private static Uri GetResponseAddress(string host, ConsumeContext context)
    {
        if (context.ResponseAddress == null)
            return context.ResponseAddress;

        object relativeResponseAddress;
        if (!context.Headers.TryGetHeader("RelativeResponseAddress", out relativeResponseAddress) || !(relativeResponseAddress is string))
            throw new InvalidOperationException("Message has ResponseAddress but doesn't have RelativeResponseAddress header");

        return new Uri(host + relativeResponseAddress);
    }
}
ResponseAddressConsumeContext
class ResponseAddressConsumeContext : BaseConsumeContext
{
    private readonly ConsumeContext _context;

    public ResponseAddressConsumeContext(Uri responseAddressOverride, ConsumeContext context)
        : base(context.ReceiveContext)
    {
        _context = context;
        ResponseAddress = responseAddressOverride;
    }

    public override Uri ResponseAddress { get; }

    public override bool TryGetMessage<T>(out ConsumeContext<T> consumeContext)
    {
        ConsumeContext<T> context;
        if (_context.TryGetMessage(out context))
        {
            // the most hackish part in the whole arrangement
            consumeContext = new MessageConsumeContext<T>(this, context.Message);
            return true;
        }
        else
        {
            consumeContext = null;
            return false;
        }
    }

    // all other members just delegate to _context
}
And when configuring the bus
var result = MassTransit.Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri(hostAddress), h =>
    {
        h.Username(...);
        h.Password(...);
    });
    cfg.UseFilter(new ResponseAddressConsumeFilter(hostAddress));
    ...
});
result.ConnectSendObserver(new ResponseAddressSendObserver(hostAddress));
So now relative response addresses are sent with the messages and used on the receiving side.
Using observers to modify anything is not recommended by the documentation, but it should be fine in this case.
Maybe there is a better solution, but I haven't found one. HTH

Wrong Thread.CurrentPrincipal in async WCF end-method

I have a WCF service which has its Thread.CurrentPrincipal set in the ServiceConfiguration.ClaimsAuthorizationManager.
When I implement the service asynchronously like this:
public IAsyncResult BeginMethod1(AsyncCallback callback, object state)
{
    // Audit log call (uses Thread.CurrentPrincipal)
    var task = Task<int>.Factory.StartNew(this.WorkerFunction, state);
    return task.ContinueWith(res => callback(task));
}

public string EndMethod1(IAsyncResult ar)
{
    // Audit log result (uses Thread.CurrentPrincipal)
    return ar.AsyncState as string;
}

private int WorkerFunction(object state)
{
    // perform work
    return 0; // placeholder so the sample compiles
}
I find that the Thread.CurrentPrincipal is set to the correct ClaimsPrincipal in the Begin-method and also in the WorkerFunction, but in the End-method it's set to a GenericPrincipal.
I know I can enable ASP.NET compatibility for the service and use HttpContext.Current.User which has the correct principal in all methods, but I'd rather not do this.
Is there a way to force the Thread.CurrentPrincipal to the correct ClaimsPrincipal without turning on ASP.NET compatibility?
Starting with a summary of WCF extension points, you'll see the one that is expressly designed to solve your problem. It is called a CallContextInitializer. Take a look at this article which gives CallContextInitializer sample code.
If you make an ICallContextInitializer extension, you will be given control over both the BeginXXX thread context AND the EndXXX thread context. You are saying that the ClaimsAuthorizationManager has correctly established the user principal in your BeginXXX(...) method. In that case, you then make for yourself a custom ICallContextInitializer which either assigns or records the CurrentPrincipal, depending on whether it is handling your BeginXXX() or your EndXXX(). Something like:
public object BeforeInvoke(System.ServiceModel.InstanceContext instanceContext, System.ServiceModel.IClientChannel channel, System.ServiceModel.Channels.Message request)
{
    object principal = null;
    if (request.Properties.TryGetValue("userPrincipal", out principal))
    {
        // If we got here, it means we're about to call the EndXXX(...) method.
        Thread.CurrentPrincipal = (IPrincipal)principal;
    }
    else
    {
        // If we got here, it means we're about to call the BeginXXX(...) method.
        request.Properties["userPrincipal"] = Thread.CurrentPrincipal;
    }
    ...
}
To clarify further, consider two cases. Suppose you implemented both an ICallContextInitializer and an IParameterInspector. Suppose that these hooks are expected to execute with a synchronous WCF service and with an async WCF service (which is your special case).
Below are the sequence of events and the explanation of what is happening:
Synchronous Case
ICallContextInitializer.BeforeInvoke();
IParameterInspector.BeforeCall();
//...service executes...
IParameterInspector.AfterCall();
ICallContextInitializer.AfterInvoke();
Nothing surprising in the above code. But now look below at what happens with asynchronous service operations...
Asynchronous Case
ICallContextInitializer.BeforeInvoke(); //TryGetValue() fails, so this records the UserPrincipal.
IParameterInspector.BeforeCall();
//...Your BeginXXX() routine now executes...
ICallContextInitializer.AfterInvoke();
//...Now your Task async code executes (or finishes executing)...
ICallContextInitializer.BeforeInvoke(); //TryGetValue succeeds, so this assigns the UserPrincipal.
//...Your EndXXX() routine now executes...
IParameterInspector.AfterCall();
ICallContextInitializer.AfterInvoke();
As you can see, the CallContextInitializer ensures you have the opportunity to initialize values such as your CurrentPrincipal just before the EndXXX() routine runs. It therefore doesn't matter that the EndXXX() routine assuredly executes on a different thread than the BeginXXX() routine did. And yes, the System.ServiceModel.Channels.Message object, which stores your user principal between the Begin/End methods, is preserved and properly transmitted by WCF even though the thread changed.
Overall, this approach allows your EndXXX(IAsyncresult) to execute with the correct IPrincipal, without having to explicitly re-establish the CurrentPrincipal in the EndXXX() routine. And as with any WCF behavior, you can decide if this applies to individual operations, all operations on a contract, or all operations on an endpoint.
Not really the answer to my question, but an alternate approach to implementing the WCF service (in .NET 4.5) that does not exhibit the same issues with Thread.CurrentPrincipal:
public async Task<string> Method1()
{
    // Audit log call (uses Thread.CurrentPrincipal)
    try
    {
        return await Task.Factory.StartNew(() => this.WorkerFunction());
    }
    finally
    {
        // Audit log result (uses Thread.CurrentPrincipal)
    }
}

private string WorkerFunction()
{
    // perform work
    return string.Empty;
}
The valid approach to this is to create an extension:
public class SLOperationContext : IExtension<OperationContext>
{
    private readonly IDictionary<string, object> items;

    private static ReaderWriterLockSlim _instanceLock = new ReaderWriterLockSlim();

    private SLOperationContext()
    {
        items = new Dictionary<string, object>();
    }

    public IDictionary<string, object> Items
    {
        get { return items; }
    }

    public static SLOperationContext Current
    {
        get
        {
            SLOperationContext context = OperationContext.Current.Extensions.Find<SLOperationContext>();
            if (context == null)
            {
                _instanceLock.EnterWriteLock();
                context = new SLOperationContext();
                OperationContext.Current.Extensions.Add(context);
                _instanceLock.ExitWriteLock();
            }
            return context;
        }
    }

    public void Attach(OperationContext owner) { }

    public void Detach(OperationContext owner) { }
}
Now this extension is used as a container for objects that you want to persist between thread switching as OperationContext.Current will remain the same.
Now you can use this in BeginMethod1 to save current user:
SLOperationContext.Current.Items["Principal"] = OperationContext.Current.ClaimsPrincipal;
And then in EndMethod1 you can get the user by typing:
ClaimsPrincipal principal = (ClaimsPrincipal)SLOperationContext.Current.Items["Principal"];
EDIT (Another approach):
public IAsyncResult BeginMethod1(AsyncCallback callback, object state)
{
    var task = Task.Factory.StartNew(this.WorkerFunction, state);
    var ec = ExecutionContext.Capture();
    return task.ContinueWith(res =>
        ExecutionContext.Run(ec, (_) => callback(task), null));
}