MassTransit with RabbitMQ Request/Response wrong reply address because of network segments

I have a web app that uses a request/response message in MassTransit.
This works in our test environment without a problem.
However, on the customer deployment we face a problem. At the customer site there are two network segments, A and B. The component doing the database call is in segment A; the web app and the RabbitMQ server are in segment B.
Due to security restrictions, the component in segment A has to go through a load balancer with a given address. The component itself can connect to RabbitMQ with MassTransit. So far so good.
The web component in segment B, however, uses the direct address of the RabbitMQ server. When the web component starts the request/response call, I can see that the message arrives at the component in segment A.
However, I see that the consumer tries to call the RabbitMQ server on the "wrong" address: it uses the address the web component used to issue the request, whereas the component in segment A should reply via the load balancer address.
Is there a way to configure or tell the RespondAsync call to use the connection address configured for that component?
Of course, the easiest solution would be to have the web component also connect through the load balancer, but due to the network segment/security setup the load balancer is only reachable from segment A.
Any input/help is appreciated.

I had a similar problem with RabbitMQ federation. Here's what I did.
ResponseAddressSendObserver
class ResponseAddressSendObserver : ISendObserver
{
    private readonly string _hostUriString;

    public ResponseAddressSendObserver(string hostUriString)
    {
        _hostUriString = hostUriString;
    }

    public Task PreSend<T>(SendContext<T> context)
        where T : class
    {
        if (context.ResponseAddress != null)
        {
            // Send relative response address alongside the message
            context.Headers.Set("RelativeResponseAddress",
                context.ResponseAddress.AbsoluteUri.Substring(_hostUriString.Length));
        }
        return Task.CompletedTask;
    }
    ...
}
ResponseAddressConsumeFilter
class ResponseAddressConsumeFilter : IFilter<ConsumeContext>
{
    private readonly string _hostUriString;

    public ResponseAddressConsumeFilter(string hostUriString)
    {
        _hostUriString = hostUriString;
    }

    public Task Send(ConsumeContext context, IPipe<ConsumeContext> next)
    {
        var responseAddressOverride = GetResponseAddress(_hostUriString, context);
        return next.Send(new ResponseAddressConsumeContext(responseAddressOverride, context));
    }

    public void Probe(ProbeContext context) { }

    private static Uri GetResponseAddress(string host, ConsumeContext context)
    {
        if (context.ResponseAddress == null)
            return context.ResponseAddress;

        object relativeResponseAddress;
        if (!context.Headers.TryGetHeader("RelativeResponseAddress", out relativeResponseAddress) || !(relativeResponseAddress is string))
            throw new InvalidOperationException("Message has ResponseAddress but doesn't have RelativeResponseAddress header");

        return new Uri(host + relativeResponseAddress);
    }
}
ResponseAddressConsumeContext
class ResponseAddressConsumeContext : BaseConsumeContext
{
    private readonly ConsumeContext _context;

    public ResponseAddressConsumeContext(Uri responseAddressOverride, ConsumeContext context)
        : base(context.ReceiveContext)
    {
        _context = context;
        ResponseAddress = responseAddressOverride;
    }

    public override Uri ResponseAddress { get; }

    public override bool TryGetMessage<T>(out ConsumeContext<T> consumeContext)
    {
        ConsumeContext<T> context;
        if (_context.TryGetMessage(out context))
        {
            // the most hackish part in the whole arrangement
            consumeContext = new MessageConsumeContext<T>(this, context.Message);
            return true;
        }
        else
        {
            consumeContext = null;
            return false;
        }
    }

    // all other members just delegate to _context
}
And when configuring the bus:
var result = MassTransit.Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri(hostAddress), h =>
    {
        h.Username(...);
        h.Password(...);
    });
    cfg.UseFilter(new ResponseAddressConsumeFilter(hostAddress));
    ...
});
result.ConnectSendObserver(new ResponseAddressSendObserver(hostAddress));
So now relative response addresses are sent with the messages and used on the receiving side.
Using observers to modify anything is not recommended by the documentation, but it should be fine in this case.
Maybe there is a better solution, but I haven't found one. HTH

Related

Spring Integration testing a Files.inboundAdapter flow

I have this flow that I am trying to test, but nothing works as expected. The flow itself works well, but testing seems a bit tricky.
This is my flow:
@Configuration
@RequiredArgsConstructor
public class FileInboundFlow {

    private final ThreadPoolTaskExecutor threadPoolTaskExecutor;
    private String filePath;

    @Bean
    public IntegrationFlow fileReaderFlow() {
        return IntegrationFlows.from(Files.inboundAdapter(new File(this.filePath))
                        .filterFunction(...)
                        .preventDuplicates(false),
                endpointConfigurer -> endpointConfigurer.poller(
                        Pollers.fixedDelay(500)
                                .taskExecutor(this.threadPoolTaskExecutor)
                                .maxMessagesPerPoll(15)))
                .transform(new UnZipTransformer())
                .enrichHeaders(this::headersEnricher)
                .transform(Message.class, this::modifyMessagePayload)
                .route(Map.class, this::channelsRouter)
                .get();
    }

    private String channelsRouter(Map<String, File> payload) {
        boolean isZip = payload.values()
                .stream()
                .anyMatch(file -> isZipFile(file));
        return isZip ? ZIP_CHANNEL : XML_CHANNEL; // ZIP_CHANNEL and XML_CHANNEL are PublishSubscribeChannel
    }

    @Bean
    public SubscribableChannel xmlChannel() {
        var channel = new PublishSubscribeChannel(this.threadPoolTaskExecutor);
        channel.setBeanName(XML_CHANNEL);
        return channel;
    }

    @Bean
    public SubscribableChannel zipChannel() {
        var channel = new PublishSubscribeChannel(this.threadPoolTaskExecutor);
        channel.setBeanName(ZIP_CHANNEL);
        return channel;
    }

    // There is a @ServiceActivator on each channel
    @ServiceActivator(inputChannel = XML_CHANNEL)
    public void handleXml(Message<Map<String, File>> message) {
        ...
    }

    @ServiceActivator(inputChannel = ZIP_CHANNEL)
    public void handleZip(Message<Map<String, File>> message) {
        ...
    }

    // Plus a @Transformer on the XML_CHANNEL
    @Transformer(inputChannel = XML_CHANNEL, outputChannel = BUS_CHANNEL)
    private List<BusData> xmlFileToIngestionMessagePayload(Map<String, File> xmlFilesByName) {
        return xmlFilesByName.values()
                .stream()
                .map(...)
                .collect(Collectors.toList());
    }
}
I would like to test multiple cases, the first one is checking the message payload published on each channel after the end of fileReaderFlow.
So I defined this test class:
@SpringBootTest
@SpringIntegrationTest
@ExtendWith(SpringExtension.class)
class FileInboundFlowTest {

    @Autowired
    private MockIntegrationContext mockIntegrationContext;

    @TempDir
    static Path localWorkDir;

    @BeforeEach
    void setUp() {
        copyFileToTheFlowDir(); // here I copy a file to trigger the flow
    }

    @Test
    void checkXmlChannelPayloadTest() throws InterruptedException {
        Thread.sleep(1000); // waiting for the flow execution
        PublishSubscribeChannel xmlChannel = this.getBean(XML_CHANNEL, PublishSubscribeChannel.class); // I extract the channel to listen to the message sent to it.
        xmlChannel.subscribe(message -> {
            assertThat(message.getPayload()).isInstanceOf(Map.class); // This is never executed
        });
    }
}
As expected, that test does not work, because the assertThat(message.getPayload()).isInstanceOf(Map.class); is never executed.
After reading the documentation I didn't find any hint to help me solve that issue. Any help would be appreciated! Thanks a lot.
First of all, that channel.setBeanName(XML_CHANNEL); does not affect the target bean. You do this in the bean creation phase, and the dependency injection container knows nothing about this setting: it simply does not consult it. If you really would like to dictate XML_CHANNEL as the bean name, you'd better look into the @Bean(name) attribute.
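For example, a minimal sketch of that approach, reusing your XML_CHANNEL constant (the rest of the bean body stays as in your config):

@Bean(name = XML_CHANNEL)
public SubscribableChannel xmlChannel() {
    // the container itself now registers this bean under XML_CHANNEL,
    // so inputChannel = XML_CHANNEL resolves to this channel
    return new PublishSubscribeChannel(this.threadPoolTaskExecutor);
}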
The problem in the test is that you are missing the async nature of the flow. That Files.inboundAdapter() works on a fully different thread and emits messages outside of your test method. So, even if you managed to subscribe to the channel in time, before any message is emitted to it, that wouldn't mean your test works correctly: the assertThat() would be performed on a different thread, and therefore there would be no real JUnit report for your test method context.
So, what I'd suggest to do is the following (see the sketch after this list):
Have Files.inboundAdapter() stopped at the beginning of the test, before any setup you'd like to do in the test. Or at least don't place files into that filePath, so the channel adapter doesn't emit messages.
Take the channel from the application context and, if you wish, subscribe to it or use a ChannelInterceptor.
Have an async barrier, e.g. a CountDownLatch, to pass to that subscriber.
Start the channel adapter or put a file into the dir for scanning.
Wait for the async barrier before verifying some value or state.
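Putting those steps together, here is a minimal sketch of how such a test could look. It is only an outline under assumptions: the channel is injected by its bean name, copyFileToTheFlowDir() is your existing helper, the 10-second timeout is arbitrary, and the usual imports (java.util.concurrent, org.springframework.messaging, AssertJ) are in place:

@SpringBootTest
@SpringIntegrationTest
class FileInboundFlowAsyncTest {

    @Autowired
    private SubscribableChannel xmlChannel; // taken from the application context

    @Test
    void checkXmlChannelPayloadTest() throws Exception {
        CountDownLatch latch = new CountDownLatch(1);
        AtomicReference<Message<?>> received = new AtomicReference<>();

        // subscribe before any file lands in the polled directory
        MessageHandler handler = message -> {
            received.set(message);
            latch.countDown();
        };
        xmlChannel.subscribe(handler);
        try {
            copyFileToTheFlowDir(); // only now trigger the flow
            // wait on the async barrier, then assert on the test thread
            assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
            assertThat(received.get().getPayload()).isInstanceOf(Map.class);
        } finally {
            xmlChannel.unsubscribe(handler);
        }
    }
}

This way the assertion happens on the test thread after the barrier is released, so a failure is properly reported by JUnit.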

Stop polling files when RabbitMQ is down: Spring Integration

I'm working on a project where we poll files from an SFTP server and stream them out as objects to a RabbitMQ queue. Right now, when RabbitMQ is down, the flow still polls and deletes the file from the server, so the file is lost when sending it to the queue fails. I'm using an ExpressionEvaluatingRequestHandlerAdvice to remove the file on successful transformation. My code looks like this:
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(sftpProperties.getSftpHost());
    factory.setPort(sftpProperties.getSftpPort());
    factory.setUser(sftpProperties.getSftpPathUser());
    factory.setPassword(sftpProperties.getSftpPathPassword());
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}

@Bean
public SftpRemoteFileTemplate sftpRemoteFileTemplate() {
    return new SftpRemoteFileTemplate(sftpSessionFactory());
}

@Bean
@InboundChannelAdapter(channel = TransformerChannel.TRANSFORMER_OUTPUT, autoStartup = "false",
        poller = @Poller(value = "customPoller"))
public MessageSource<InputStream> sftpMessageSource() {
    SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(sftpRemoteFileTemplate,
            null);
    messageSource.setRemoteDirectory(sftpProperties.getSftpDirPath());
    messageSource.setFilter(new SftpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(),
            "streaming"));
    messageSource.setFilter(new SftpSimplePatternFileListFilter("*.txt"));
    return messageSource;
}

@Bean
@Transformer(inputChannel = TransformerChannel.TRANSFORMER_OUTPUT,
        outputChannel = SFTPOutputChannel.SFTP_OUTPUT,
        adviceChain = "deleteAdvice")
public org.springframework.integration.transformer.Transformer transformer() {
    return new SFTPTransformerService("UTF-8");
}

@Bean
public ExpressionEvaluatingRequestHandlerAdvice deleteAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setOnSuccessExpressionString(
            "@sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])");
    advice.setPropagateEvaluationFailures(false);
    return advice;
}
I don't want the files to get removed/polled from the remote SFTP server when the RabbitMQ server is down. How can I achieve this?
UPDATE
Apologies for not mentioning that I'm using the Spring Cloud Stream Rabbit binder. Here is the transformer service:
public class SFTPTransformerService extends StreamTransformer {

    public SFTPTransformerService(String charset) {
        super(charset);
    }

    @Override
    protected Object doTransform(Message<?> message) throws Exception {
        String fileName = message.getHeaders().get("file_remoteFile", String.class);
        Object fileContents = super.doTransform(message);
        return new customFileDTO(fileName, (String) fileContents);
    }
}
UPDATE-2
I added a TransactionSynchronizationFactory on the customPoller as suggested. Now it doesn't poll the file when the Rabbit server is down, but when the server is back up it keeps polling the same file over and over again, and I cannot figure out why. I guess I cannot use PollerSpec because I'm on version 4.3.2.
@Bean(name = "customPoller")
public PollerMetadata pollerMetadataDTX(StartStopTrigger startStopTrigger,
        CustomTriggerAdvice customTriggerAdvice) {
    PollerMetadata pollerMetadata = new PollerMetadata();
    pollerMetadata.setAdviceChain(Collections.singletonList(customTriggerAdvice));
    pollerMetadata.setTrigger(startStopTrigger);
    pollerMetadata.setMaxMessagesPerPoll(Long.valueOf(sftpProperties.getMaxMessagePoll()));

    ExpressionEvaluatingTransactionSynchronizationProcessor syncProcessor =
            new ExpressionEvaluatingTransactionSynchronizationProcessor();
    syncProcessor.setBeanFactory(applicationContext.getAutowireCapableBeanFactory());
    syncProcessor.setBeforeCommitChannel(
            applicationContext.getBean(TransformerChannel.TRANSFORMER_OUTPUT, MessageChannel.class));
    syncProcessor.setAfterCommitChannel(
            applicationContext.getBean(SFTPOutputChannel.SFTP_OUTPUT, MessageChannel.class));
    syncProcessor.setAfterCommitExpression(new SpelExpressionParser().parseExpression(
            "@sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])"));

    DefaultTransactionSynchronizationFactory defaultTransactionSynchronizationFactory =
            new DefaultTransactionSynchronizationFactory(syncProcessor);
    pollerMetadata.setTransactionSynchronizationFactory(defaultTransactionSynchronizationFactory);
    return pollerMetadata;
}
I don't know if you need this info, but my CustomTriggerAdvice and StartStopTrigger look like this:
@Component
public class CustomTriggerAdvice extends AbstractMessageSourceAdvice {

    @Autowired
    private StartStopTrigger startStopTrigger;

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        return true;
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        if (result == null) {
            if (startStopTrigger.getStart()) {
                startStopTrigger.stop();
            }
        } else {
            if (!startStopTrigger.getStart()) {
                startStopTrigger.stop();
            }
        }
        return result;
    }
}

public class StartStopTrigger implements Trigger {

    private PeriodicTrigger startTrigger;
    private boolean start;

    public StartStopTrigger(PeriodicTrigger startTrigger, boolean start) {
        this.startTrigger = startTrigger;
        this.start = start;
    }

    @Override
    public Date nextExecutionTime(TriggerContext triggerContext) {
        if (!start) {
            return null;
        }
        start = true;
        return startTrigger.nextExecutionTime(triggerContext);
    }

    public void stop() {
        start = false;
    }

    public void start() {
        start = true;
    }

    public boolean getStart() {
        return this.start;
    }
}
Well, it would be great to see your SFTPTransformerService, to determine how it is possible for the onSuccessExpression to be performed when there should be an exception in case of a down broker.
You also should not only throw an exception and skip the delete, but also consider adding a RequestHandlerRetryAdvice to re-send the file to RabbitMQ: https://docs.spring.io/spring-integration/docs/5.0.6.RELEASE/reference/html/messaging-endpoints-chapter.html#retry-advice
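For reference, a sketch of declaring such an advice bean; the retry policy values here are illustrative, not taken from your project:

@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));         // up to 3 attempts
    retryTemplate.setBackOffPolicy(new ExponentialBackOffPolicy()); // growing delay between attempts
    advice.setRetryTemplate(retryTemplate);
    return advice;
}

It would then be referenced from the endpoint's adviceChain together with your deleteAdvice.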
UPDATE
So, well, since Gary guessed that you use Spring Cloud Stream to send the message to the Rabbit Binder after your internal process (very sad that you didn't share that information originally), you need to take a look at the Binder error handling on the matter: https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/#_retry_with_the_rabbitmq_binder
And it is true that the ExpressionEvaluatingRequestHandlerAdvice is applied only to the SFTPTransformerService and nothing more. The downstream error (in the Binder) is not included in this process.
UPDATE 2
Yeah... I think Gary is right, and we have no choice but to configure a TransactionSynchronizationFactory on the customPoller level instead of that ExpressionEvaluatingRequestHandlerAdvice.
The DefaultTransactionSynchronizationFactory can be configured with an ExpressionEvaluatingTransactionSynchronizationProcessor, which has a similar goal to the mentioned ExpressionEvaluatingRequestHandlerAdvice, but on the transaction level, which will include your whole process, starting with the SFTP channel adapter and ending on the Rabbit Binder level with the send-to-AMQP attempts.
See the Reference Manual for more information: https://docs.spring.io/spring-integration/reference/html/transactions.html#transaction-synchronization
The point with the ExpressionEvaluatingRequestHandlerAdvice (and any AbstractRequestHandlerAdvice) is that it has a boundary only around the handleRequestMessage() method, and therefore applies only during the component on which it is declared.

Do we need sessions in WebRTC?

I am creating a sample project for learning purposes (later on I will be working on a project based on WebRTC and Kurento). I am using the Kurento media server with it; I have modified one of the Kurento tutorials and made a sample out of it.
In all of the samples for the Kurento server, a UserRegistry.java is used to store objects of UserSession, as shown below:
public class UserSession {

    private static final Logger log = LoggerFactory.getLogger(UserSession.class);

    private final String name;
    private final WebSocketSession session;
    private String sdpOffer;
    private String callingTo;
    private String callingFrom;
    private WebRtcEndpoint webRtcEndpoint;
    private WebRtcEndpoint playingWebRtcEndpoint;
    private final List<IceCandidate> candidateList = new ArrayList<>();

    public UserSession(WebSocketSession session, String name) {
        this.session = session;
        this.name = name;
    }

    public void sendMessage(JsonObject message) throws IOException {
        log.debug("Sending message from user '{}': {}", name, message);
        session.sendMessage(new TextMessage(message.toString()));
    }

    public String getSessionId() {
        return session.getId();
    }

    public void setWebRtcEndpoint(WebRtcEndpoint webRtcEndpoint) {
        this.webRtcEndpoint = webRtcEndpoint;
        if (this.webRtcEndpoint != null) {
            for (IceCandidate e : candidateList) {
                this.webRtcEndpoint.addIceCandidate(e);
            }
            this.candidateList.clear();
        }
    }

    public void addCandidate(IceCandidate candidate) {
        if (this.webRtcEndpoint != null) {
            this.webRtcEndpoint.addIceCandidate(candidate);
        } else {
            candidateList.add(candidate);
        }
        if (this.playingWebRtcEndpoint != null) {
            this.playingWebRtcEndpoint.addIceCandidate(candidate);
        }
    }

    public void clear() {
        this.webRtcEndpoint = null;
        this.candidateList.clear();
    }
}
I have two questions on this:
Why do we need the session object?
What are the alternatives (if there are any) to manage sessions?
Let me give some more background on the 2nd question. I found out that I can run the Kurento JavaScript client on the client side only (I need to convert it to the browser version and then I can use it). That way I won't require a backend server, i.e. Node.js or Tomcat (this is my assumption). So in this case, how would I manage the session? Or can I totally remove the UserRegistry concept and use some other way?
Thanks & Regards
You need to store sessions to implement signalling between the clients and the application server. See for example here. The signalling diagram describes the messages required to start/stop/etc. the WebRTC video communication.
If you are planning to get rid of the application server (i.e. move to the JavaScript client completely), you can take a look at a publish/subscribe API such as PubNub.

How to implement a Restlet JAX-RS handler which is a thin proxy to a RESTful API, possibly implemented in the same java process?

We have two RESTful APIs, one internal and one public, implemented by different jars. The public API sort of wraps the internal one, performing the following steps:
Do some work
Call internal API
Do some work
Return the response to the user
It may happen (though not necessarily) that the two jars run in the same Java process.
We are using Restlet with the JAX-RS extension.
Here is an example of a simple public API implementation, which just forwards to the internal API:
@PUT
@Path("abc")
public MyResult method1(@Context UriInfo uriInfo, InputStream body) throws Exception {
    String url = uriInfo.getAbsolutePath().toString().replace("/api/", "/internalapi/");
    RestletClientResponse<MyResult> reply = WebClient.put(url, body, MyResult.class);
    RestletUtils.addResponseHeaders(reply.responseHeaders);
    return reply.returnObject;
}
Where WebClient.put is:
public class WebClient {

    public static <T> RestletClientResponse<T> put(String url, Object body, Class<T> returnType) throws Exception {
        Response restletResponse = Response.getCurrent();
        ClientResource resource = new ClientResource(url);
        Representation reply = null;
        try {
            Client timeoutClient = new Client(Protocol.HTTP);
            timeoutClient.setConnectTimeout(30000);
            resource.setNext(timeoutClient);
            reply = resource.put(body, MediaType.APPLICATION_JSON);
            T result = new JacksonConverter().toObject(new JacksonRepresentation<T>(reply, returnType), returnType, resource);
            Status status = resource.getStatus();
            return new RestletClientResponse<T>(result, (Form) resource.getResponseAttributes().get(HeaderConstants.ATTRIBUTE_HEADERS), status);
        } finally {
            if (reply != null) {
                reply.release();
            }
            resource.release();
            Response.setCurrent(restletResponse);
        }
    }
}
and RestletClientResponse<T> is:
public class RestletClientResponse<T> {

    public T returnObject = null;
    public Form responseHeaders = null;
    public Status status = null;

    public RestletClientResponse(T returnObject, Form responseHeaders, Status status) {
        this.returnObject = returnObject;
        this.responseHeaders = responseHeaders;
        this.status = status;
    }
}
and RestletUtils.addResponseHeaders is:
public class RestletUtils {

    public static void addResponseHeader(String key, Object value) {
        Form responseHeaders = (Form) org.restlet.Response.getCurrent().getAttributes().get(HeaderConstants.ATTRIBUTE_HEADERS);
        if (responseHeaders == null) {
            responseHeaders = new Form();
            org.restlet.Response.getCurrent().getAttributes().put(HeaderConstants.ATTRIBUTE_HEADERS, responseHeaders);
        }
        responseHeaders.add(key, value.toString());
    }

    public static void addResponseHeaders(Form responseHeaders) {
        for (String headerKey : responseHeaders.getNames()) {
            RestletUtils.addResponseHeader(headerKey, responseHeaders.getValues(headerKey));
        }
    }
}
The problem is that if the two jars run in the same Java process, an exception thrown from the internal API is not routed to the JAX-RS exception mapper of the internal API: the exception propagates up to the public API and is translated into an Internal Server Error (500).
Which means I am doing it wrong. So, my question is: how do I invoke the internal RESTful API from within the public API implementation, given the constraint that both the client and the server may run in the same Java process?
Surely, there are other problems, but I have a feeling that fixing the one I have just described is going to fix others as well.
The problem has nothing to do with the fact that both the internal and public JARs are in the same JVM. They are perfectly separated by the WebClient.put() method, which performs a real HTTP call. So, an exception in the internal API doesn't propagate to the public API.
The internal server error in the public API is caused by the post-processing mechanism, which interprets the output of the internal API and crashes for some reason. Don't blame the internal API; it is perfectly isolated and can't cause any trouble (even though it's in the same JVM).

WCF closing best practice

I read that the best practice for using a WCF proxy is:
YourClientProxy clientProxy = new YourClientProxy();
try
{
    // ... use your service
    clientProxy.Close();
}
catch (FaultException)
{
    clientProxy.Abort();
}
catch (CommunicationException)
{
    clientProxy.Abort();
}
catch (TimeoutException)
{
    clientProxy.Abort();
}
My problem is that after I allocate my proxy, I assign event handlers to it and also perform other initialization using the proxy:
public void InitProxy()
{
    sdksvc = new SdkServiceClient();
    sdksvc.InitClusteringObjectCompleted += new EventHandler<InitClusteringObjectCompletedEventArgs>(sdksvc_InitClusteringObjectCompleted);
    sdksvc.InitClusteringObjectAsync(Utils.DSN, Utils.USER, Utils.PASSWORD);
    sdksvc.DoClusteringCompleted += new EventHandler<DoClusteringCompletedEventArgs>(sdksvc_DoClusteringCompleted);
    sdksvc.CreateTablesCompleted += new EventHandler<CreateTablesCompletedEventArgs>(sdksvc_CreateTablesCompleted);
}
I now need to call the InitProxy() method each time I use the proxy if I want to use it as the best practice suggests.
Any ideas on how to avoid this?
There are several options. One option is to write a helper class as follows:
public class SvcClient : IDisposable {

    public SvcClient(ICommunicationObject service) {
        if (service == null) {
            throw new ArgumentNullException("service");
        }
        _service = service;
        // Add your event handlers here, e.g. using your example:
        sdksvc = new SdkServiceClient();
        sdksvc.InitClusteringObjectCompleted += new EventHandler<InitClusteringObjectCompletedEventArgs>(sdksvc_InitClusteringObjectCompleted);
        sdksvc.InitClusteringObjectAsync(Utils.DSN, Utils.USER, Utils.PASSWORD);
        sdksvc.DoClusteringCompleted += new EventHandler<DoClusteringCompletedEventArgs>(sdksvc_DoClusteringCompleted);
        sdksvc.CreateTablesCompleted += new EventHandler<CreateTablesCompletedEventArgs>(sdksvc_CreateTablesCompleted);
    }

    public void Dispose() {
        try {
            if (_service.State == CommunicationState.Faulted) {
                _service.Abort();
            }
        } finally {
            _service.Close();
        }
    }

    private readonly ICommunicationObject _service;
}
To use this class write the following:
var clientProxy = new YourClientProxy();
using (new SvcClient(clientProxy)) {
    // use clientProxy as usual. No need to call Abort() and/or Close() here.
}
When the constructor for SvcClient is called, it sets up the SdkServiceClient instance as desired. Furthermore, the SvcClient class cleans up the service client proxy as well, aborting and/or closing the connection as needed, regardless of how control flow leaves the using-block.
I don't see how the ClientProxy and the InitProxy() are linked, but if they are linked this strongly, I'd move the initialization of the ClientProxy into InitProxy() (or make a method that initializes both) so you can control both their lifespans from there.