Restlet Java Framework - does Client need to be closed in some way?

I've just inherited some code that uses the Restlet framework - must admit I'd never even heard of it until now.
The pattern seems to be to do:
Request request = new Request(method, uri);
Client client = new Client(protocols);
Response response = client.handle(request);
but when I was debugging the code, I noticed quite a few daemon threads running. Is that normal? Should the code be closing the Client or something similar?
Thanks,
Paul

You should call client.stop() when you are done.
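For example, a minimal sketch of the same pattern with an explicit shutdown (method, uri and protocols are the placeholders from the question; note that stop() is declared to throw a checked exception in the Restlet versions I've seen):
Client client = new Client(protocols);
try {
    Request request = new Request(method, uri);
    Response response = client.handle(request);
    // ... use the response ...
} finally {
    try {
        // Releases the client connector and the helper/daemon threads it started.
        client.stop();
    } catch (Exception e) {
        // nothing useful to do if shutdown itself fails; log it if you like
    }
}
If you make many requests, it is usually better to reuse a single Client and stop it once when the application shuts down, rather than creating a new one per request.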

Related

Serilog using EnrichDiagnosticContext with additional properties not being logged in SignalR Hub

I have recently implemented Serilog logging in my ASP.NET Core/.NET 5 web app that uses SignalR. I'm using the Elasticsearch sink and everything is largely working as expected. I decided to add some additional HttpContext properties to be logged on each request, so I went down the road of extending the call to UseSerilogRequestLogging() in StartUp.cs so as to enrich the diagnostic context with a couple of extra properties (mainly because this seemed like the simplest way to do it):
app.UseSerilogRequestLogging(options =>
{
    options.EnrichDiagnosticContext = (diagnosticContext, httpContext) =>
    {
        diagnosticContext.Set("HttpRequestClientIP", httpContext.Connection.RemoteIpAddress);
        diagnosticContext.Set("UserName", httpContext.User?.Identity?.Name == null ? "(anonymous)" : httpContext.User.Identity.Name);
    };
});
At first this seemed to work as expected, until I noticed it wasn't always working. I really want the extra properties logged on all log records written, and it works fine for the log records Serilog writes automatically when typical HTTP GETs, HTTP POSTs, etc. occur... But in my SignalR Hub class, I have a couple of places where I'm manually writing my own log records with Logger.Log(LogLevel.Information, "whatever.."), and these extra properties are simply not there on those records.
What am I missing here? Is it something about this being in a SignalR Hub that makes them unavailable? Or perhaps there's something I'm doing wrong with my Logger.Log() calls?
Any ideas would be appreciated.
Thanks-
It's not going to work with SignalR.
Behind the scenes, app.UseSerilogRequestLogging adds a middleware to the request pipeline that invokes RequestLoggingMiddleware, as you can see in detail here.
SignalR uses the first HTTP request to upgrade the connection to a WebSocket, and that traffic doesn't go through the rest of the pipeline at all. It therefore never reaches RequestLoggingMiddleware, which is what you are relying on to log the request.
I finally ended up going with a couple of custom Enrichers. I experimented briefly with middleware vs. enrichers and they both seem to work as expected: both always added the additional properties to all log entries. I'm still not quite sure I understand why the DiagnosticContext option behaves the way it does, unless it is simply due to the logging in question being in a SignalR hub, as #Gordon Khanh Ng. posted. If that were the root of the problem, though, you wouldn't think the enrichers or middleware would work either.

How to implement ALLO command on Apache FTP?

We have had an embedded Apache FTP server running in a gateway for several years. It has always worked without problems.
But now a customer is trying to connect with a device of a brand that we've never had before, and contrary to all other clients so far, that thing sends the ALLO command in advance to make sure the server has enough space.
But Apache FTP doesn't seem to know that command. The trace log states:
RECEIVED: ALLO 77482
SENT: 502 Command ALLO not implemented.
following which the client cuts the connection.
The command is also not present in the Apache documentation:
https://mina.apache.org/ftpserver-project/ftpserver_commands.html
So the question is, can I plug my own implementation into the server somehow?
Just to be clear, I'm not asking how to implement the functionality, just how I can pass my own implementation to Apache FTP for it to use, if that is possible without touching the source code.
Since the application in question has been running very stable for a long time, I would really hate to tear the Apache FTP server out of there and embed another one...
Well, that was surprisingly simple once I dug myself through to the right code.
The implementation of a command is simple enough; in this case I've just started with a stub for testing:
class ALLO : AbstractCommand() {
    override fun execute(session: FtpIoSession, context: FtpServerContext, request: FtpRequest) {
        session.write(LocalizedFtpReply.translate(session, request, context,
            FtpReply.REPLY_200_COMMAND_OKAY, "ALLO", "bring it!"))
    }
}
Inherit AbstractCommand, override execute and write a response to the session.
The question is of course then how to make the server aware of the implementation, which also turns out to be really simple, although there sure as hell doesn't seem to be any documentation around. But you can just instantiate a CommandFactoryFactory, map your implementation, build the CommandFactory and set it in the FtpServerFactory:
val commandFactoryFactory = CommandFactoryFactory()
commandFactoryFactory.addCommand("ALLO", ALLO())
serverFactory.commandFactory = commandFactoryFactory.createCommandFactory()
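For anyone doing this in plain Java rather than Kotlin, a rough equivalent of the wiring above might look like the following (a direct translation, assuming the same Apache FtpServer classes from the org.apache.ftpserver packages; serverFactory is your existing FtpServerFactory):
public class AlloCommand extends AbstractCommand {
    @Override
    public void execute(FtpIoSession session, FtpServerContext context, FtpRequest request)
            throws IOException, FtpException {
        // Stub reply for testing, mirroring the Kotlin version above.
        session.write(LocalizedFtpReply.translate(session, request, context,
                FtpReply.REPLY_200_COMMAND_OKAY, "ALLO", "bring it!"));
    }
}

CommandFactoryFactory commandFactoryFactory = new CommandFactoryFactory();
commandFactoryFactory.addCommand("ALLO", new AlloCommand());
serverFactory.setCommandFactory(commandFactoryFactory.createCommandFactory());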

What to use so Geode native client pool doesn't hang if no locator found

If I turn off my Geode server and server locator, and then try and connect a client using:
PoolFactory poolFactory = PoolManager
    .CreateFactory()
    .SetSubscriptionEnabled(true)
    .AddLocator(host, port);
if (PoolManager.Find("MyPool") == null)
    p = poolFactory.Create("MyPool");
then the poolFactory.Create("MyPool") call simply hangs. What can I use to make Create return when no connection is available?
It ought to be something like DEFAULT_SOCKET_CONNECT_TIMEOUT in the Javadoc, but that doesn't exist in the C# native client...
.SetFreeConnectionTimeout doesn't make it return either.
Thanks
I don't believe PoolFactory::Create makes any synchronous connections, so I can't explain why it hangs. As this issue would require more back and forth you should post your question on the users#geode.apache.org mailing list.

How to simulate an uncompleted Netty ChannelFuture

I'm using Netty to write a client application that sends UDP messages to a server. In short I'm using this piece of code to write the stream to the channel:
ChannelFuture future = channel.write(request, remoteInetSocketAddress);
future.awaitUninterruptibly(timeout);
if (!future.isDone()) {
    // abort logic
}
Everything works fine except for one thing: I'm unable to test the abort logic, as I cannot make the write fail - i.e. even if the server is down, the future completes successfully. The write operation usually takes about 1 ms, so setting a very small timeout doesn't help much either.
I know the preferred way would be to use an asynchronous model instead of the await() call; however, for my scenario it needs to be synchronous and I need to be sure it gets finished at some point.
Does anyone know how I could simulate an uncompleted future?
Many thanks in advance!
MM
Depending on how your code is written, you could use a mocking framework such as Mockito. If that is not possible, you can also use a "connected" UDP socket, i.e. a datagram socket that has been connected to a specific remote address. If you send to a bogus server you should get a PortUnreachableException or something similar.
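If you go the mocking route, a minimal sketch could look like the following (assuming Mockito static imports and the Netty 3-style channel.write(message, remoteAddress) call from the question; request, remoteInetSocketAddress and timeout are the same placeholders as above, and the mocked future simply never completes, so the abort branch is exercised):
// Hypothetical test setup - Channel and ChannelFuture are interfaces, so Mockito can mock them.
Channel channel = mock(Channel.class);
ChannelFuture neverDone = mock(ChannelFuture.class);

// The future reports that it did not complete within the timeout.
when(neverDone.awaitUninterruptibly(anyLong())).thenReturn(false);
when(neverDone.isDone()).thenReturn(false);
when(channel.write(any(), any(InetSocketAddress.class))).thenReturn(neverDone);

ChannelFuture future = channel.write(request, remoteInetSocketAddress);
future.awaitUninterruptibly(timeout);
if (!future.isDone()) {
    // abort logic runs here in the test
}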
Netty has a FailedFuture class that can be used for this purpose.
You can, for example, structure your tests so that they simulate the failure like this:
ChannelFuture future;
if (ALWAYS_FAIL) {
    future = channel.newFailedFuture(new Exception("I failed"));
} else {
    future = channel.write(request, remoteInetSocketAddress);
}

Identify WCF clients that do not dispose properly

We have a WCF service hosted inside IIS. There are loads of different client applications calling this service. WS-SecureConversation is used.
Now, the service diagnostic log shows warnings that security sessions are being aborted. Most likely this is because of clients that do not properly close the session.
More info: the problem was "pending" security sessions. Those are sessions that were only ever opened, never used. This is pretty annoying, as you can have a maximum of 128 such pending sessions before your service starts barfing 500s.
This can be easily reproduced (see answer below). I was able to query 128 SessionInitiationMessageHandlers using WinDbg. So this might be a good measure to identify this scenario.
Still, a way to identify those "misbehaving" clients would be useful.
Regards,
Alex
Since client and server share nothing but messages going between them, there's not much you can really do.
On the server side, you could look at some bits of information being sent from the client - check out the OperationContext.Current property in your service method - see the MSDN documentation on OperationContext for details about what exactly is provided.
So you might be able to log certain information to identify the "offending" clients.
Marc
Sweet... the best way to kill a WCF service with a secure conversation seems to be to do nothing.
ServicePointManager.ServerCertificateValidationCallback += delegate { return true; };
var client = new MyClient();
client.ClientCredentials.UserName.UserName = "user";
client.ClientCredentials.UserName.Password = "password";
while (true)
{
    Console.WriteLine("OPEN");
    var c = client.ChannelFactory.CreateChannel();
    ((ICommunicationObject)c).Open();
    // If I comment out the following line, the service throws a
    // "Too busy, too many pending sessions" 500.
    var request = new MyRequest { };
    c.Do(request);
    // Of course I did *not* comment out this line
    ((ICommunicationObject)c).Close();
}
Meanwhile, this bug has been confirmed by MS but still remains in .NET 4.x, even though MS says otherwise:
http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=499859