When writing data to a web server, my tests show HttpWebRequest.ReadWriteTimeout is ignored, contrary to the MSDN documentation. For example, if I set ReadWriteTimeout to 1 (= 1 ms) and call myRequestStream.Write() with a buffer that takes 10 seconds to transfer, it transfers successfully and never times out on .NET 3.5 SP1. The same test running on Mono 2.6 times out immediately, as expected. What could be wrong?
There appears to be a bug where the write timeout, when set on the Stream instance returned to you by BeginGetRequestStream(), is not propagated down to the native socket. I will be filing a bug to make sure this issue is corrected for a future release of the .NET Framework.
Here is a workaround.
private static void SetRequestStreamWriteTimeout(Stream requestStream, int timeout)
{
    // Work around a framework bug where the request stream write timeout doesn't make it
    // to the socket. The "m_Chunked" field indicates we are performing chunked reads. Since
    // this stream is being used for writes, the value of this field is irrelevant except
    // that setting it to true causes the Eof property on the ConnectStream object to evaluate
    // to false. The code responsible for setting the socket option short-circuits when it
    // sees Eof is true, and does not set the flag. If Eof is false, the write timeout
    // propagates to the native socket correctly.
    if (!s_requestStreamWriteTimeoutWorkaroundFailed)
    {
        try
        {
            Type connectStreamType = requestStream.GetType();
            FieldInfo fieldInfo = connectStreamType.GetField("m_Chunked", BindingFlags.NonPublic | BindingFlags.Instance);
            fieldInfo.SetValue(requestStream, true);
        }
        catch (Exception)
        {
            s_requestStreamWriteTimeoutWorkaroundFailed = true;
        }
    }

    requestStream.WriteTimeout = timeout;
}

private static bool s_requestStreamWriteTimeoutWorkaroundFailed;
Problem Statement
Context
I'm a Software Engineer in Test running order permutations of Restaurant Menu Items to confirm that they succeed at order placement w/ the POS
In short, this POSTs a JSON payload to an endpoint, which then validates the order w/ a POS to determine success/fail/other
The POS, and therefore the Transactions per Second (TPS), may vary, but each Back End uses the same core handling
This can be as many as ~22,000 permutations per item, each of easily manageable JSON size, that need to be handled as quickly as possible
The Network can vary wildly depending upon the Restaurant and/or Region one is testing
E.g. some have much higher latency than others
Therefore, the HTTPClient should be able to intelligently negotiate the same content & endpoint regardless of this
Direct Problem
I'm using Apache's HTTP Client 5 w/ PoolingAsyncClientConnectionManager to execute both the GET for the Menu contents, and the POST to check if the order succeeds
This works out of the box, but sometimes loses connections w/ Stream Refused, specifically:
org.apache.hc.core5.http2.H2StreamResetException: Stream refused
No individual tuning that I can find seems to work across all network contexts w/ variable latency
Following the stacktrace seems to indicate that the stream had already closed, so I need a way to either keep it open or avoid executing on an already-closed connection:
if (connState == ConnectionHandshake.GRACEFUL_SHUTDOWN) {
    throw new H2StreamResetException(H2Error.PROTOCOL_ERROR, "Stream refused");
}
Some Attempts to Fix the Problem
Tried to use Search Engines to find answers but there are few hits for HTTPClient5
Tried to use official documentation but this is sparse
Changing max connections per route to a reduced number, shifting inactivity validation, or changing connection time-to-live
Where the inactivity checks may fix the POST but stall the GET for some transactions
And tuning for one region/restaurant may work for one, then break for another, w/ only the Network as the variable
PoolingAsyncClientConnectionManagerBuilder builder = PoolingAsyncClientConnectionManagerBuilder
        .create()
        .setTlsStrategy(getTlsStrategy())
        .setMaxConnPerRoute(12)
        .setMaxConnTotal(12)
        .setValidateAfterInactivity(TimeValue.ofMilliseconds(1000))
        .setConnectionTimeToLive(TimeValue.ofMinutes(2))
        .build();
Shifting to a custom RequestConfig w/ different timeouts
private HttpClientContext getHttpClientContext() {
    RequestConfig requestConfig = RequestConfig.custom()
            .setConnectTimeout(Timeout.of(10, TimeUnit.SECONDS))
            .setResponseTimeout(Timeout.of(10, TimeUnit.SECONDS))
            .build();
    HttpClientContext httpContext = HttpClientContext.create();
    httpContext.setRequestConfig(requestConfig);
    return httpContext;
}
Initial Code Segments for Analysis
(In addition to the above segments w/ change attempts)
Wrapper handling to init and get response
public SimpleHttpResponse getFullResponse(String url, PoolingAsyncClientConnectionManager manager, SimpleHttpRequest req) {
    try (CloseableHttpAsyncClient httpclient = getHTTPClientInstance(manager)) {
        httpclient.start();
        CountDownLatch latch = new CountDownLatch(1);
        long startTime = System.currentTimeMillis();
        Future<SimpleHttpResponse> future = getHTTPResponse(url, httpclient, latch, startTime, req);
        latch.await();
        return future.get();
    } catch (IOException | InterruptedException | ExecutionException e) {
        e.printStackTrace();
        return new SimpleHttpResponse(999, CommonUtils.getExceptionAsMap(e).toString());
    }
}
With actual handler and probing code
private Future<SimpleHttpResponse> getHTTPResponse(String url, CloseableHttpAsyncClient httpclient, CountDownLatch latch, long startTime, SimpleHttpRequest req) {
    return httpclient.execute(req, getHttpContext(), new FutureCallback<SimpleHttpResponse>() {
        @Override
        public void completed(SimpleHttpResponse response) {
            latch.countDown();
            logger.info("[{}][{}ms] - {}", response.getCode(), getTotalTime(startTime), url);
        }

        @Override
        public void failed(Exception e) {
            latch.countDown();
            logger.error("[{}ms] - {} - {}", getTotalTime(startTime), url, e);
        }

        @Override
        public void cancelled() {
            latch.countDown();
            logger.error("[{}ms] - request cancelled for {}", getTotalTime(startTime), url);
        }
    });
}
Direct Question
Is there a way to configure the client such that it can handle these variances on its own, without explicitly modifying the configuration for each endpoint context?
Fixed w/ a Combination of the Below to Ensure the Connection Is Live/Ready
(Or at least is stable)
Forcing HTTP 1
CloseableHttpAsyncClient client = HttpAsyncClients.custom()
        .setConnectionManager(manager)
        .setRetryStrategy(getRetryStrategy())
        .setVersionPolicy(HttpVersionPolicy.FORCE_HTTP_1)
        .setConnectionManagerShared(true)
        .build();
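Note that getRetryStrategy() is a helper that isn't shown above. As a rough, hypothetical sketch of the shape it could take (the retry count and interval are illustrative assumptions, not values from the post), HttpClient 5's DefaultHttpRequestRetryStrategy can be plugged in:
import org.apache.hc.client5.http.HttpRequestRetryStrategy;
import org.apache.hc.client5.http.impl.DefaultHttpRequestRetryStrategy;
import org.apache.hc.core5.util.TimeValue;

// Hypothetical sketch: the original never shows getRetryStrategy(), so the
// retry count and interval here are illustrative guesses.
private HttpRequestRetryStrategy getRetryStrategy() {
    // Retry up to 3 times, waiting 1 second between attempts.
    return new DefaultHttpRequestRetryStrategy(3, TimeValue.ofSeconds(1));
}
The default strategy is conservative about replaying non-idempotent requests such as POST after an I/O failure, which matters here since the POSTs place orders.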
Setting Effective Headers for POST
Specifically the close header
req.setHeader("Connection", "close, TE");
Note: Inactivity check helps, but still sometimes gets refusals w/o this
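For completeness, a minimal sketch of what setting that header on the POST request could look like with SimpleHttpRequest (the endpoint URL and JSON body are placeholders, not taken from the post):
import org.apache.hc.client5.http.async.methods.SimpleHttpRequest;
import org.apache.hc.client5.http.async.methods.SimpleRequestBuilder;
import org.apache.hc.core5.http.ContentType;

// Placeholder endpoint and payload; only the Connection header value comes from the post above.
SimpleHttpRequest req = SimpleRequestBuilder.post("https://example.test/orders/validate")
        .setBody("{\"items\": []}", ContentType.APPLICATION_JSON)
        .build();
req.setHeader("Connection", "close, TE");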
Setting Inactivity Checks by Type
Set POSTs to validate immediately after inactivity
Note: Using 1000 ms for both caused a high drop rate for some systems
PoolingAsyncClientConnectionManagerBuilder
        .create()
        .setValidateAfterInactivity(TimeValue.ofMilliseconds(0))
Set GET to validate after 1s
PoolingAsyncClientConnectionManagerBuilder
        .create()
        .setValidateAfterInactivity(TimeValue.ofMilliseconds(1000))
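Putting the two fragments together, my reading is that GETs and POSTs end up on separate connection managers (and therefore separate clients). A minimal sketch of that wiring, with the variable names and the reuse of getTlsStrategy()/getRetryStrategy() assumed for illustration rather than taken from the answer:
// Sketch only: the answer shows the two builder fragments above but not how they are
// combined; the variable names and shared helpers below are assumptions.
PoolingAsyncClientConnectionManager postManager = PoolingAsyncClientConnectionManagerBuilder
        .create()
        .setTlsStrategy(getTlsStrategy())
        .setValidateAfterInactivity(TimeValue.ofMilliseconds(0))    // POSTs validate every time
        .build();

PoolingAsyncClientConnectionManager getManager = PoolingAsyncClientConnectionManagerBuilder
        .create()
        .setTlsStrategy(getTlsStrategy())
        .setValidateAfterInactivity(TimeValue.ofMilliseconds(1000)) // GETs validate after 1s idle
        .build();

CloseableHttpAsyncClient postClient = HttpAsyncClients.custom()
        .setConnectionManager(postManager)
        .setRetryStrategy(getRetryStrategy())
        .setVersionPolicy(HttpVersionPolicy.FORCE_HTTP_1)
        .setConnectionManagerShared(true)
        .build();

CloseableHttpAsyncClient getClient = HttpAsyncClients.custom()
        .setConnectionManager(getManager)
        .setRetryStrategy(getRetryStrategy())
        .setVersionPolicy(HttpVersionPolicy.FORCE_HTTP_1)
        .setConnectionManagerShared(true)
        .build();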
Given the Error Context
Tracing the connection problem in the stacktrace to AbstractH2StreamMultiplexer shows ConnectionHandshake.GRACEFUL_SHUTDOWN as what triggers the stream refusal:
if (connState == ConnectionHandshake.GRACEFUL_SHUTDOWN) {
    throw new H2StreamResetException(H2Error.PROTOCOL_ERROR, "Stream refused");
}
Which corresponds to
connState = streamMap.isEmpty() ? ConnectionHandshake.SHUTDOWN : ConnectionHandshake.GRACEFUL_SHUTDOWN;
Reasoning
If I'm understanding correctly:
The connections were being closed, whether intentionally or not
However, they were not being confirmed ready before executing again
Which caused it to fail because the stream was not viable
Therefore the fix works because (it seems):
Forcing HTTP1 allows for a single context to manage the connections
Where HttpVersionPolicy NEGOTIATE/FORCE_HTTP_2 had greater or equivalent failures across the spectrum of regions/menus
And it assures that all connections are valid before use
And POSTs are always closed due to the close header, which is unavailable to HTTP2
Therefore
GET is checked for validity w/ reasonable periodicity
POST is checked every time, and since it is forcibly closed, it is re-acquired before execution
Which leaves no room for unexpected closures
And it removes the potential that the client was incorrectly switching to HTTP2
Will accept this until a better answer comes along, as this is stable but sub-optimal.
Given
I am using XSockets 3.0.6, which I think is the latest stable version. Under MS.NET the behavior is as expected. On Ubuntu 14.04 with Mono 3.6.1, though, the server has some kind of delay before sending messages to clients.
Problem
On MS.NET, when I type a string in the client and send it, all clients are immediately notified. On Mono, though, the message is received by the server but the clients are not notified immediately. With only one message I waited for 5 minutes and the clients were still not notified. When the messages reach 5-6, all clients get notified about all of the messages at once. It seems like the server uses some kind of buffering, but conditionally, depending on the .NET runtime, which is very strange.
Question
Am I doing something wrong? How to change the code so that all clients are immediately notified as in MS.NET?
Code
I followed (and slightly modified) the quick start example as follows...
Server
Initialization
using (var container = Composable.GetExport<IXSocketServerContainer>())
{
    container.StartServers();
    foreach (var server in container.Servers)
    {
        Console.WriteLine(server.ConfigurationSetting.Endpoint);
    }
    Console.Write("Started! Hit 'Enter' to quit.");
    Console.ReadLine();
    container.StopServers();
}
Custom controller
public class CustomController : XSocketController
{
    public override void OnMessage(ITextArgs textArgs)
    {
        Console.WriteLine("No delay = {0}", this.Socket.Socket.NoDelay);
        if (!this.Socket.Socket.NoDelay)
        {
            Socket.Socket.NoDelay = true;
        }
        Console.WriteLine("Received {0} about {1}.", textArgs.data, textArgs.@event);
        this.SendToAll(textArgs);
    }
}
Client
var client = new XSocketClient("ws://127.0.0.1:4502/CustomController", "*");
client.OnOpen += (sender, eventArgs) => System.Console.WriteLine("OPEN");
client.Bind("foo", message => System.Console.WriteLine(message.data));

Thread.Sleep(1000);
client.Open();

string input;
System.Console.WriteLine("Type 'quit' to quit and any other string to send a message:");
do
{
    input = System.Console.ReadLine();
    if (input != "quit")
    {
        client.Send(input, "foo");
    }
} while (input != "quit");
I experienced this myself when running XSockets on a Raspberry Pi.
After some investigation I realized that it had to do with the fact that the Pi is single core and that the internal queue did not send the messages out until 5 messages had been queued up... then all messages were sent out.
How many cores does your computer have?
This issue is resolved in 4.0 (in alpha right now).
Edit: I have only had this issue on single-core machines with Mono; on my MacBook Air everything works great on Mono.
It looks like the Nagle algorithm is not disabled in XSockets. In System.Net.Sockets you can disable the Nagle algorithm by setting the Socket.NoDelay property to true.
I'm not familiar with XSockets, but if you can get the underlying System.Net.Sockets.Socket instance from XSockets, you can set this property to true and avoid the sending delay.
We have the following channelIdle implementation:
public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) throws Exception {
    Response response = business.getResponse();
    final Channel channel = e.getChannel();
    ChannelFuture channelFuture = Channels.write(
            channel,
            ChannelBuffers.wrappedBuffer(response.getXML().getBytes())
    );
    if (response.shouldDisconnect()) { // returns true and listener _is_ added.
        channelFuture.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                channel.close(); // never gets called :(
            }
        });
    }
}
When running in non-SSL mode this works as expected.
However, when running with SSL enabled the operationComplete method never gets called. We've verified this a few times on various machines. The idle timeout happens many times but the operationComplete isn't called. We don't see any exceptions being thrown.
I've tried tracing through the code to see where operationComplete should get called but it is complex and I've not quite figured it out.
There is a call to future = succeededFuture(channel); in SslHandler.wrap() but I don't know if that means anything. The future returned from wrap is never used elsewhere in the SslHandler code.
This sounds like a bug. Would it be possible to write a simple test case that shows the problem and open an issue at our GitHub issue tracker [1]?
Be sure to explain whether it happens all the time or only sometimes, etc.
[1] https://github.com/netty/netty/issues
We're using WCF to build a simple web service which our product uses to upload large files over a WAN link. It's supposed to be a simple HTTP PUT, and it's working fine for the most part.
Here's a simplified version of the service contract:
[ServiceContract, XmlSerializerFormat]
public interface IReplicationWebService
{
    [OperationContract]
    [WebInvoke(Method = "PUT", UriTemplate = "agents/{sourceName}/epoch/{guid}/{number}/{type}")]
    ReplayResult PutEpochFile(string sourceName, string guid, string number, string type, Stream stream);
}
In the implementation of this contract, we read data from stream and write it out to a file. This works great, so we added some error handling for cases when there's not enough disk space to store the file. Here's roughly what it looks like:
public ReplayResult PutEpochFile(string sourceName, string guid, string number, string type, Stream inStream)
{
    //Stuff snipped
    try
    {
        //Read from the stream and write to the file
    }
    catch (IOException ioe)
    {
        //IOException may mean no disk space
        try
        {
            inStream.Close();
        }
        catch
        {
            // if instream caused the IOException, close may throw
        }

        _logger.Debug(ioe.ToString());
        throw new FaultException<IOException>(ioe, new FaultReason(ioe.Message), new FaultCode("IO"));
    }
}
To test this, I'm sending a 100GB file to a server that doesn't have enough space for the file. As expected this throws an exception, but the call to inStream.Close() appeared to hang. I checked into it, and what's actually happening is that the call to Close() made its way through the WCF plumbing until it reached System.ServiceModel.Channels.DrainOnCloseStream.Close(), which according to Reflector allocates a Byte[] buffer and keeps reading from the stream until it's at EOF.
In other words, the Close call is reading the entire 100GB of test data from the stream before returning!
Now it may be that I don't need to call Close() on this stream. If that's the case I'd like an explanation as to why. But more importantly, I'd appreciate it if anyone could explain to me why Close() is behaving this way, why it's not considered a bug, and how to reconfigure WCF so that doesn't happen.
.Close() is intended to be a "safe" and "friendly" way of stopping your operation - and it will indeed complete the currently running requests before shutting down - by design.
If you want to throw down the sledgehammer, use .Abort() on your client proxy (or service host) instead. That just shuts down everything without checking and without being nice about waiting for operations to complete.
In classic ASP.NET I’d persist data extracted from a web service in a base class property as follows:
private string m_stringData;
public string _stringData
{
    get
    {
        if (m_stringData == null)
        {
            //fetch data from my web service
            m_stringData = ws.FetchData();
        }
        return m_stringData;
    }
}
This way I could simply make reference to _stringData and know that I’d always get the data I was after (maybe sometimes I’d use Session state as a store instead of a private member variable).
In Silverlight with WCF I might choose to use Isolated Storage as my persistence mechanism, but the service call can't be done like this, because a WCF service has to be called asynchronously.
How can I both invoke the service call and retrieve the response in one method?
Thanks,
Mark
In your method, invoke the service call asynchronously and register a callback that sets a flag. After you have invoked the method, enter a busy/wait loop checking the flag periodically until the flag is set indicating that the data has been returned. The callback should set the backing field for your method and you should be able to return it as soon as you detect the flag has been set indicating success. You'll also need to be concerned about failure. If it's possible to get multiple calls to your method from different threads, you'll also need to use some locking to make your code thread-safe.
EDIT
Actually, the busy/wait loop is probably not the way to go if the web service supports BeginGetData/EndGetData semantics. I had a look at some of my code where I do something similar and I use WaitOne to simply wait on the async result and then retrieve it. If your web service doesn't support this then throw a Thread.Sleep -- say for 50-100ms -- in your wait loop to give time for other processes to execute.
Example from my code:
IAsyncResult asyncResult = null;
try
{
    asyncResult = _webService.BeginGetData( searchCriteria, null, null );
    if (asyncResult.AsyncWaitHandle.WaitOne( _timeOut, false ))
    {
        result = _webService.EndGetData( asyncResult );
    }
}
catch (WebException e)
{
    ...log the error, clean up...
}
Thanks for your help, tvanfosson. I followed your code and have also found a somewhat similar solution that meets my needs exactly, using a lambda expression:
private string m_stringData;
public string _stringData
{
    get
    {
        //if we don't have a list of departments, fetch from WCF
        if (m_stringData == null)
        {
            StringServiceClient client = new StringServiceClient();
            client.GetStringCompleted +=
                (sender, e) =>
                {
                    m_stringData = e.Result;
                };
            client.GetStringAsync();
        }
        return m_stringData;
    }
}
EDIT
Oops... actually this doesn't work either :-(
I ended up making the calls asynchronously and altering my program logic to use the MVVM pattern and more binding.