I have a WCF service, and when we make around 40 simultaneous calls to it we start seeing a lot of timeouts on the client. Please see the attached picture of the WCF trace log, sorted by duration. The service and client are on the same machine, using the standard TCP binding.
The errors we see are:
1) The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '10675199.02:48:05.4775807'.
2) System.IO.PipeException, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
There was an error writing to the pipe: The pipe is being closed. (232, 0xe8).
Update after further testing:
It looks like this happens only when the service does some work; if I comment out the work the service does, it handles the same 1000 requests in 3 minutes with no problems.
The client has about 40 active connections to the server.
Even with a sample app, I see the same errors. I do the following in the server:
public double Add(double n1, double n2)
{
    for (int i = 0; i < 1000000; i++)
    {
        Trace.WriteLine("this is a test");
    }
    return n1 + n2;
}
and the following on the client:
Action<string> a = CallService5000TimesInLoop;
for (int i = 0; i < 30; i++)
{
    a.BeginInvoke("Url", null, null);
}
I'm using an AWS Application Load Balancer (ALB) to expose ASP.NET Core gRPC services. The services run in Fargate containers and expose unsecured HTTP ports. The ALB terminates the outer TLS connection and forwards the unencrypted traffic to a target group based on the route. The gRPC application has several client-streaming endpoints, and the client can pause the streaming for several minutes. I know that there are HTTP/2 PING frames, which can be used in such cases to keep alive a connection that has no data transmission for some amount of time.
The gRPC server is configured to send HTTP/2 pings every 20 seconds to keep the connection alive. I tested this approach and it works: the ping frames went from the server and were acknowledged by the client.
But this approach fails when it comes to the ALB. During the transmission pauses I don't see any packets from the server behind the load balancer (I use Wireshark), and after the 1-minute idle timeout the ALB resets the connection.
I tried using client-sent HTTP/2 pings as well, but the connection still resets after 1 minute, and I have no evidence of whether these ping frames actually reach the server behind the ALB.
My assumption is that the AWS ALB doesn't allow such packets to pass through it, but I couldn't find any documentation that confirms it.
ALB forwards requests based on HTTP protocol semantics, not raw HTTP/2 frames, so something like a ping frame will only apply to one of the hops.
If you want an end-to-end ping, you could define a gRPC API that performs the ping. For server-to-client pings you would need a server-side streaming API. It might actually be preferable to let the clients initiate the pings, to reduce the work the server has to perform.
The AWS support team responded to my ticket, and the short answer is that ALB does not support HTTP/2 ping frames. They suggested increasing the idle timeout on the load balancer, but that solution may not be applicable in some cases.
As Matthias247 already mentioned, a possible workaround is to define a gRPC API for the purpose of doing a ping.
Since ALB does not support HTTP/2 ping frames, a straightforward way to solve this is to use a custom PING message.
You can open a new stream to send messages whenever the current stream is closed by the ALB due to its idle timeout (no messages within the idle period).
When the ALB idle timeout elapses, a RST_STREAM frame with ErrCode=PROTOCOL_ERROR is sent from the ALB to the client side. The client can handle this error in both the sender and the receiver and then open a new stream for new messages, reusing the HTTP/2 connection.
Here is sample code using gRPC-Go:
conn, errD := grpc.Dial(ServerAddress,
    grpc.WithTransportCredentials(cred),
    grpc.WithConnectParams(grpc.ConnectParams{MinConnectTimeout: 63 * time.Second}),
    grpc.WithKeepaliveParams(keepalive.ClientParameters{
        Time:                time.Second * 20,
        Timeout:             time.Second * 3,
        PermitWithoutStream: true,
    }))
if errD != nil {
    log.Fatalf("net.Connect err: %v", errD)
}
defer conn.Close()

grpcClient := protocol.NewChatClient(conn)
ctx := context.Background()
stream, errS := grpcClient.Stream(ctx, grpc.WaitForReady(true))
if errS != nil {
    log.Fatalf("get BidirectionalHello stream err: %v", errS)
}

for i := 0; i < 200; i++ {
    err := stream.Send(msg) // msg: some message to send (elided here)
    if err != nil {
        if err == io.EOF {
            // get another stream to send new messages on the sender side
            stream, errS = grpcClient.Stream(ctx, grpc.WaitForReady(true))
            if errS != nil {
                log.Fatalf("get stream err: %v", errS)
            }
        } else if s, ok := status.FromError(err); ok {
            switch s.Code() {
            case codes.OK:
                // noop
            case codes.Unavailable, codes.Canceled, codes.DeadlineExceeded:
                return
            default:
                return
            }
        }
    }

    go func() {
        for {
            res, errR := stream.Recv()
            if errR != nil {
                if errR == io.EOF {
                    log.Printf("stream recv err %+v\n", errR)
                }
                // get another stream to receive new messages on the receiver side
                stream, errS = grpcClient.Stream(ctx, grpc.WaitForReady(true))
                if errS != nil {
                    log.Fatalf("in recv to get stream err: %v", errS)
                }
                return
            }
            log.Printf("recv resp %+v", res)
        }
    }()

    // sleep past the idle timeout of the ALB (60 seconds)
    time.Sleep(61 * time.Second)
}
To view the details of the gRPC messages, you can run it with GODEBUG=http2debug=2 go run main.go.
I'm trying to write some code to retrieve the e-mails stored in a POP3 mailbox hosted at a third-party email provider, using MailKit (version 2.1.0.3). When attempting to connect to the mail server, the connection fails every other time with the error:
"An error occurred while attempting to establish an SSL or TLS
connection."
With inner exception
"Authentication failed because the remote party has closed the transport stream."
The code below succeeds on the first attempt and fails on the second with the error mentioned above. This happens without fail; it always fails the second time I try. This leads me to believe something is wrong with how the connection is terminated.
Here is the code I use to set up a connection.
using (var client = new Pop3Client())
{
    await client.ConnectAsync("pop3.**.nl", 995, true); // FAILS HERE
    await client.AuthenticateAsync("**username**", "**password**");

    for (int i = 0; i < client.Count; i++)
    {
        ...
    }

    await client.DeleteAllMessagesAsync();
    await client.DisconnectAsync(true);
}
I already attempted to resolve the issue using the following settings, but none of them helped. Changing the SSL protocol to SSL3 or SSL2, however, caused the error to appear on every connection attempt instead of every other one.
client.ServerCertificateValidationCallback += (e, r, t, y) => true;
client.SslProtocols = SslProtocols.Tls12;
client.CheckCertificateRevocation = false;
I'm running some asynchronous GET requests using a proxy with authentication. When doing HTTPS requests, I always run into an exception after 2 successful asynchronous requests:
java.lang.IllegalArgumentException: Auth scheme may not be null
When executing the GET requests without a proxy, or using HTTP instead of HTTPS, the exception never occurs.
Example from the Apache HttpAsyncClient examples:
HttpHost proxy = new HttpHost("proxyname", 3128);
CredentialsProvider credsProvider = new BasicCredentialsProvider();
credsProvider.setCredentials(new AuthScope(proxy), new UsernamePasswordCredentials("proxyuser", "proxypass"));
CloseableHttpAsyncClient httpClient = HttpAsyncClients.custom().setDefaultCredentialsProvider(credsProvider).build();
httpClient.start();

RequestConfig config = RequestConfig.custom().setProxy(proxy).build();
for (int i = 0; i < 3; i++) {
    HttpGet httpGet = new HttpGet(url);
    httpGet.setConfig(config);
    httpClient.execute(httpGet, new FutureCallback<HttpResponse>() {
        public void failed(Exception ex) {
            ex.printStackTrace(); // Exception occurs here after the 2nd iteration
        }

        public void completed(HttpResponse result) {
            // works for the first and second iteration
        }

        public void cancelled() {
        }
    });
}
httpClient.close();
If I run the code above with 'http://httpbin.org/get', there is no exception, but if I run it with 'https://httpbin.org/get', I get the following exception after 2 successful requests:
java.lang.IllegalArgumentException: Auth scheme may not be null
at org.apache.http.util.Args.notNull(Args.java:54)
at org.apache.http.impl.client.AuthenticationStrategyImpl.authSucceeded(AuthenticationStrategyImpl.java:215)
at org.apache.http.impl.client.ProxyAuthenticationStrategy.authSucceeded(ProxyAuthenticationStrategy.java:44)
at org.apache.http.impl.auth.HttpAuthenticator.isAuthenticationRequested(HttpAuthenticator.java:88)
at org.apache.http.impl.nio.client.MainClientExec.needAuthentication(MainClientExec.java:629)
at org.apache.http.impl.nio.client.MainClientExec.handleResponse(MainClientExec.java:569)
at org.apache.http.impl.nio.client.MainClientExec.responseReceived(MainClientExec.java:309)
at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseReceived(DefaultClientExchangeHandlerImpl.java:151)
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.responseReceived(HttpAsyncRequestExecutor.java:315)
at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:255)
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:121)
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
at java.lang.Thread.run(Thread.java:748)
Note: I'm using httpasyncclient 4.1.4
If this is the exact code you have been executing, then the problem is quite apparent. Welcome to the world of event-driven programming.
Essentially, what happens is the following:
1. The client initiates 3 message exchanges by submitting 3 requests to the client execution pipeline in a tight loop.
2. The 3 message exchanges get queued up for execution.
3. The loop exits.
4. Client shutdown is initiated.
5. Now the client is racing to execute the 3 initiated message exchanges and to shut itself down at the same time.
If one is lucky and the target server is fast enough, one might get all 3 exchanges completed before the client shuts down its i/o event processing threads.
If unlucky, or when request execution is relatively slow, for instance due to the use of TLS transport security, some of the message exchanges may get terminated in the middle of the process. This is why you see the failure with the https scheme but not with http.
I have a WCF Web service client program that sends over 200 concurrent requests to the service through async calls on the WCF client proxy generated by SvcUtil.exe. I had presumed that the CLR would honor maxconnection="2" in the client config.
<system.net>
  <defaultProxy enabled="true">
    <proxy bypassonlocal="False" usesystemdefault="True" />
    <bypasslist />
    <module />
  </defaultProxy>
  <connectionManagement>
    <add address="*" maxconnection="2" />
  </connectionManagement>
</system.net>
I know the default value of maxconnection is 2 anyway, and my code does NOT set System.Net.ServicePointManager.DefaultConnectionLimit. However, when I run Fiddler 2 to monitor the outbound requests, my program always sends 64 concurrent requests within 0.5 seconds, ignoring maxconnection.
Here's the code:
namespace DemoClient
{
    class Program
    {
        const string realWorldEndpoint = "DefaultBinding_RealWorld";

        static void Main(string[] args)
        {
            var list = new int[100];
            for (int i = 0; i < 100; i++)
            {
                list[i] = i;
            }

            var tasks = list.Select(d => GetHardData(d));
            Task.WaitAll(tasks.ToArray());
            Console.WriteLine("All done.");
            Console.ReadLine();
        }

        static async Task<string> GetHardData(int d)
        {
            using (RealWorldProxy client = new RealWorldProxy(realWorldEndpoint))
            {
                client.ClientCredentials.UserName.UserName = "test";
                client.ClientCredentials.UserName.Password = "tttttttt";
                return await client.GetHardDataAsync(d);
            }
        }
    }
}
and the implementation of GetHardData on the service side is:
public string GetHardData(int value)
{
    System.Threading.Thread.Sleep(40000);
    return string.Format("You entered: {0}", value);
}
I understand my code deliberately spins up over 100 threads through thread over-subscription, but I would expect the requests to be queued in the HTTP connection pool, which should allow only 2 outbound requests/connections by default.
This behavior may cause client timeouts if the Web service doesn't respond fast enough to finish each of the 64 requests within 60 seconds: by default a Web service hosted in IIS will limit the same client to 2 concurrent calls, so the other 62 requests have to wait, and some of them may time out.
Am I missing something?
Technically I could construct a thread queue or a custom TaskScheduler to limit the number of concurrent calls to 2; however, I would rather rely on the default settings of the thread pool and the HTTP/TCP pools to do the queuing.
Am I missing something? How can I limit the concurrent HTTP requests of the WCF client according to maxconnection in the config?
I have a load tester that calls my WCF service, and I've built it with options to run the calls in parallel or not. Only when running in parallel do I get the following error on all threads: "The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error."
This is more or less my code:
if (runMultiThreaded)
{
    ParallelOptions options = new ParallelOptions();
    options.MaxDegreeOfParallelism = System.Environment.ProcessorCount;
    ParallelLoopResult loopResult = Parallel.For(0, numberOfTimesToTest, options,
        (i, loopState) =>
        {
            myService.MyOperation();
            if (loopState.ShouldExitCurrentIteration) return;
        });
}
else
{
    for (int i = 0; i < test1NumberOfRuns; i++)
    {
        myService.MyOperation();
    }
}
Any ideas? Let me know if you need more details.
UPDATE: myService is an instance of my service's operation contract interface that was created with a ChannelFactory using the CreateChannel method.
Thanks!
I'm assuming your myService is a ClientBase<T> subclass or a channel created explicitly via ChannelFactory<T>.CreateChannel? If so, those instances are not guaranteed to be thread-safe, so you need an instance per worker thread.