org.restlet.Context timeout value - restlet

When org.restlet.Context is using Apache HttpClient to make the connection, how do I specify the timeout period the client should wait for the server to respond? Here is what I have so far:
final Context context = new Context();
context.getParameters().add("socketTimeout", "10000");
context.getParameters().add("", "10000"); //?? connectionTimeout
ClientResource clientResource = new ClientResource(context, ss.getUrl());
Is there documentation for the possible parameter values and their meaning?

In fact, all supported parameters for the HttpClient extension are described on this page:
http://www.restlet.org/documentation/snapshot/jee/ext/org/restlet/ext/httpclient/HttpClientHelper.html
In your case, it seems that the idleTimeout parameter is the one you are after.
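For example, applying that to the snippet above (a minimal sketch; the parameter names are the ones listed on the HttpClientHelper page, and ss.getUrl() is the target URL from the question):
// Assumes org.restlet.Context and org.restlet.resource.ClientResource are imported.
final Context context = new Context();
// Time to wait for the server's response, in milliseconds.
context.getParameters().add("socketTimeout", "10000");
// Time a pooled connection may stay idle before being closed, in milliseconds.
context.getParameters().add("idleTimeout", "10000");
ClientResource clientResource = new ClientResource(context, ss.getUrl());
clientResource.get();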

Related

I am trying to create a connection pool with a single connection using PoolingHttpClientConnectionManager and CloseableHttpClient for HTTPS, and reuse it

I am trying to create a connection pool in which only a single HTTPS connection is created, and subsequent requests reuse it. Currently, every request I make establishes a new connection and the previous connection goes into TIME_WAIT state.
Below is the code snippet I am using; it works for HTTP connections but not for HTTPS:
SslConfigurator sslConfig = SslConfigurator.newInstance()
        .keyStoreFile(this.connectionInfo.getKeyStorePath())
        .keyStorePassword(connectionInfo.getKeyStorePassword())
        .keyStoreType("JKS")
        .trustStoreFile(this.connectionInfo.getKeyStorePath())
        .trustStorePassword(connectionInfo.getKeyStorePassword())
        .securityProtocol("TLS");
logger.info("SSL CONFIG Accepted");
sslContext = sslConfig.createSSLContext();
SSLConnectionSocketFactory sslsf = new SSLConnectionSocketFactory(sslContext, NoopHostnameVerifier.INSTANCE);
logger.info("SSL CONTEXT CREATED, Building Client");
Registry<ConnectionSocketFactory> socketFactoryRegistry =
        RegistryBuilder.<ConnectionSocketFactory>create().register("https", sslsf).build();
connManager = new PoolingHttpClientConnectionManager(socketFactoryRegistry);
connManager.setMaxTotal(1);
connManager.setDefaultMaxPerRoute(1);
config = RequestConfig.custom().setConnectTimeout(60000)
        .setConnectionRequestTimeout(60000).setSocketTimeout(60000).build();
client = HttpClients.custom().setDefaultRequestConfig(config)
        .setConnectionManager(connManager).build();
connManager.setMaxTotal(1);
I am not sure why you think this has no effect because this should definitely restrict the total number of connections in the pool to just one at a time.
In your particular case, however, you should be using BasicHttpClientConnectionManager instead of PoolingHttpClientConnectionManager.
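For example (a minimal sketch reusing the socketFactoryRegistry and config objects built in the question's snippet; BasicHttpClientConnectionManager maintains a single connection and reuses it for successive requests on the same route):
// Single-connection manager: one connection, reused across sequential requests.
BasicHttpClientConnectionManager basicConnManager =
        new BasicHttpClientConnectionManager(socketFactoryRegistry);
CloseableHttpClient client = HttpClients.custom()
        .setDefaultRequestConfig(config)
        .setConnectionManager(basicConnManager)
        .build();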

restsharp accept-encoding disabling compression

In a particular case I need to be able to disable compression in the request/response.
Using Firefox RestClient, I am able to post some XML to a web service and successfully get a response XML with a single header parameter "Accept-Encoding": " ".
If I do not set this header, the response body comes back compressed, with some binary data in the response body (that's why I want to disable gzip in the response).
Now, using the same header value in my app (using RestSharp in C#), I still get the binary (gzip) data in the response.
Can someone please shed some light? Is it supported in RestSharp?
RestSharp does not support disabling compression.
If you look at the source code in Http.Sync.cs line 267 (assuming a sync request; async has the same code duplicated in Http.Async.cs line 424):
webRequest.AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip | DecompressionMethods.None;
That is, the underlying WebRequest that RestSharp uses to make the HTTP call has the compression options hardcoded. There is an open issue that documents this.
The feature (only just) seems to have been added, but stealthily - without a note on the issue's status or in the changelog. Possibly because it hasn't been sufficiently tested?
Nevertheless I recently had a need for this functionality and tested it - and it works. Just set the RestClient instance's AutomaticDecompression property to false.
If you intend to keep your RestClient instance long-lived, remember to do this before its first use - the setting seems to be locked in after use and cannot be changed afterwards. In my case I needed to make calls both with and without AutomaticDecompression, so I simply created two different RestClient instances.
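For example (a minimal sketch, assuming a pre-107 RestSharp where AutomaticDecompression is a boolean on the client; baseUrl is a placeholder):
// One client with decompression disabled, one with the default behaviour.
var plainClient = new RestClient(baseUrl) { AutomaticDecompression = false }; // set before first use
var gzipClient = new RestClient(baseUrl); // default: gzip/deflate decompression enabled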
Using RestSharp v106.11.4, I was unable to turn off automatic decompression as Bo Ngoh suggested. I set AutomaticDecompression on the RestClient instance at the moment it was instantiated, but the Accept-Encoding header was still added.
The way to set this and disable the decompression is through the ConfigureWebRequest method, which is exposed on the RestClient. The snippet below allowed me to turn off this feature:
var client = new RestClient();
client.ConfigureWebRequest(wr =>
{
wr.AutomaticDecompression = DecompressionMethods.None;
});
Not sure if this is relevant anymore, but maybe for future reference:
RestRequest has an IList<DecompressionMethods> AllowedDecompressionMethods property, and when a new RestRequest is created the list is empty. Only when the Execute method is called is it filled with the default values (None, Deflate, and GZip), unless it is already non-empty.
To restrict decompression to the method you want, simply call AddDecompressionMethod and add the desired decompression method - and that's that.
Example:
var client = new RestClient();
var request = new RestRequest(URL, Method.GET, DataFormat.None);
request.AddDecompressionMethod(DecompressionMethods.GZip);
var response = client.Execute(request);
As of RestSharp version 107, AddDecompressionMethod has been removed and most of the client options have been moved to RestClientOptions. Posting the solution that worked for me here, in case anyone needs it.
var options = new RestClientOptions(url)
{
AutomaticDecompression = DecompressionMethods.None
};
_client = new RestClient(options);

cxf failover recovery

I have a CXF JAX-WS client to which I added the failover strategy. The question is: how can the client recover from the backup and switch back to the primary URL? Right now, after the client fails over to the secondary URL it stays there, and will not use the primary URL even when it becomes available again.
The code for the client part is:
JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
factory.setServiceClass(GatewayPort.class);
factory.setAddress(this.configFile.getPrimaryURL());
FailoverFeature feature = new FailoverFeature();
SequentialStrategy strategy = new SequentialStrategy();
List<String> addList = new ArrayList<String>();
addList.add(this.configFile.getSecondaryURL());
strategy.setAlternateAddresses(addList);
feature.setStrategy(strategy);
List<AbstractFeature> features = new ArrayList<AbstractFeature>();
features.add(feature);
factory.setFeatures(features);
this.serviceSoap = (GatewayPort)factory.create();
Client client = ClientProxy.getClient(this.serviceSoap);
if (client != null)
{
    HTTPConduit conduit = (HTTPConduit) client.getConduit();
    HTTPClientPolicy policy = new HTTPClientPolicy();
    policy.setConnectionTimeout(this.configFile.getTimeout());
    policy.setReceiveTimeout(this.configFile.getTimeout());
    conduit.setClient(policy);
}
You may add the primary URL to the alternate addresses list instead of setting it on the JaxWsProxyFactoryBean. This way, since you are using SequentialStrategy, the primary URL will be checked first for every service call; if it fails, the secondary URL will be tried.
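In code, the change described above might look like this (a sketch based on the question's snippet; depending on your CXF version you may still need to give the factory an initial address, which can simply be the primary URL again):
FailoverFeature feature = new FailoverFeature();
SequentialStrategy strategy = new SequentialStrategy();
List<String> addList = new ArrayList<String>();
// The primary URL is first in the list, so it is tried first on each call;
// the secondary URL is only used if the primary is unavailable.
addList.add(this.configFile.getPrimaryURL());
addList.add(this.configFile.getSecondaryURL());
strategy.setAlternateAddresses(addList);
feature.setStrategy(strategy);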
You might also try an alternative CXF failover feature with failback support:
https://github.com/jaceko/cxf-circuit-switcher

Calling third party secure webservice through WCF - authentication problems

I'm writing a client against a vendor's webservice, using WCF in Visual Studio 2010. I have no ability to change their implementation or configuration.
Running against an install on their test server, I had no problems. I added a service reference from their WSDL, set the URL in code, and made the call:
var client = new TheirWebservicePortTypeClient();
client.Endpoint.Address = new System.ServiceModel.EndpointAddress(webServiceUrl);
if (webServiceUsername != "")
{
client.ClientCredentials.UserName.UserName = webServiceUsername;
client.ClientCredentials.UserName.Password = webServicePassword;
}
TheirWebserviceResponse response = client.TheirOperation(myRequest);
Simple and straightforward. Until they moved it to their production server and configured it to use https. Then I got this error:
The HTTP request is unauthorized with client authentication scheme 'Anonymous'. The authentication header received from the server was 'Basic realm='.
So I went looking for help. I found this: Can not call web service with basic authentication using wcf.
The approved answer suggested this:
BasicHttpBinding binding = new BasicHttpBinding();
binding.SendTimeout = TimeSpan.FromSeconds(25);
binding.Security.Mode = BasicHttpSecurityMode.Transport;
binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;
EndpointAddress address = new EndpointAddress(your-url-here);
ChannelFactory<MyService> factory = new ChannelFactory<MyService>(binding, address);
MyService proxy = factory.CreateChannel();
proxy.ClientCredentials.UserName.UserName = "username";
proxy.ClientCredentials.UserName.Password = "password";
That also seemed simple enough, except that I had to figure out which of the multitude of classes and interfaces generated from the WSDL for the service reference I should use in place of "MyService" above.
My first try was to use "TheirWebservicePortTypeClient" - the class I had instantiated in the previous version. That gave me a runtime error:
The type argument passed to the generic ChannelFactory class must be an interface type.
So I dug into the generated code a bit more. I saw this:
public partial class TheirWebservicePortTypeClient
:
System.ServiceModel.ClientBase<TheirWebservicePortType>,
TheirWebservicePortType
{
...
}
So I tried instantiating ChannelFactory<> with TheirWebservicePortType.
This gave me compile-time errors: the resulting proxy didn't have a ClientCredentials member or a TheirOperation() method.
So I tried "System.ServiceModel.ClientBase".
Instantiating ChannelFactory<> with it still gave me compile-time errors. The resulting proxy did have a ClientCredentials member, but it still didn't have a TheirOperation() method.
So, what gives? How do I pass a username/password to an HTTPS webservice, from a WCF client?
==================== Edited to explain the solution ====================
First, as suggested, instantiating the factory with TheirWebservicePortType and adding the username and password to factory.Credentials, instead of to proxy.ClientCredentials, worked fine. Except for one bit of confusion.
Maybe it's something to do with the odd way the wsdl is written, but the client class, TheirWebservicePortTypeClient, defined TheirOperation as taking a Request argument and returning a Response result. The TheirWebservicePortType interface defined TheirOperation as taking a TheirOperation_Input argument and returning a TheirOperation_Output result, where TheirOperation_Input contained a Request member and TheirOperation_Output contained a Response member.
In any case, if I constructed a TheirOperation_Input object from the passed Request, the call to the proxy succeeded, and I could then extract the contained Response object from the returned TheirOperation_Output object:
TheirOperation_Output output = client.TheirOperation(new TheirOperation_Input(request));
TheirWebserviceResponse response = output.TheirWebserviceResponse;
You add the credentials to the ChannelFactory's Credentials property:
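For example, putting the pieces together (a minimal sketch; the type and member names are the ones from the question's generated code, and webServiceUrl, webServiceUsername, and webServicePassword come from the question's first snippet):
// Basic auth over HTTPS, with credentials set on the factory rather than the proxy.
BasicHttpBinding binding = new BasicHttpBinding();
binding.Security.Mode = BasicHttpSecurityMode.Transport;
binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;
var factory = new ChannelFactory<TheirWebservicePortType>(binding, new EndpointAddress(webServiceUrl));
factory.Credentials.UserName.UserName = webServiceUsername;
factory.Credentials.UserName.Password = webServicePassword;
TheirWebservicePortType proxy = factory.CreateChannel();
// The interface uses the wrapped Input/Output types described above.
TheirOperation_Output output = proxy.TheirOperation(new TheirOperation_Input(request));
TheirWebserviceResponse response = output.TheirWebserviceResponse;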

HttpWebRequest runs slowly first time within SQLCLR

When making an HttpWebRequest within a CLR stored procedure (as per the code below), the first invocation after SQL Server is (re-)started, or after a given (but indeterminate) period of time, waits for quite a length of time on the GetResponse() method call.
Is there any way to resolve this that doesn't involve a "hack" such as having a SQL Server Agent job run every few minutes to try and ensure that the first "slow" call is made by the Agent and not by "real" production code?
public static SqlString MakeWebRequest(string address, string parameters, int connectTO)
{
    SqlString returnData;
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(String.Concat(address.ToString(), "?", parameters.ToString()));
    request.Timeout = (int)connectTO;
    request.Method = "GET";
    using (WebResponse response = request.GetResponse())
    {
        using (Stream responseStream = response.GetResponseStream())
        {
            using (StreamReader reader = new StreamReader(responseStream))
            {
                SqlString responseFromServer = reader.ReadToEnd();
                returnData = responseFromServer;
            }
        }
    }
    return returnData;
}
(Error handling and other non-critical code have been removed for brevity.)
See also this SQL Server forums thread.
This was a problem for me using HttpWebRequest at first. It's due to the class looking for a proxy to use. If you set the object's Proxy value to null/Nothing, it'll zip right along.
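For example (a one-line sketch applied to the request built in the question's code):
// Skip automatic proxy detection, which is what causes the slow first call.
request.Proxy = null;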
Looks to me like code-signing verification. The MS-shipped system DLLs are all signed, and SQL Server verifies the signatures at load time. Apparently the certificate revocation list is expired and the certificate verification engine times out retrieving a new list. I have blogged about this problem before in "Fix slow application startup due to code sign validation", and the problem is also described in the TechNet article "Certificate Revocation and Status Checking".
The solution is pretty arcane and involves registry editing of the key: HKLM\SOFTWARE\Microsoft\Cryptography\OID\EncodingType 0\CertDllCreateCertificateChainEngine\Config:
ChainUrlRetrievalTimeoutMilliseconds: the timeout for each individual CRL check call. If it is 0 or not present, the default value of 15 seconds is used. Change this timeout to a reasonable value like 200 milliseconds.
ChainRevAccumulativeUrlRetrievalTimeoutMilliseconds: the aggregate CRL retrieval timeout. If it is set to 0 or not present, the default value of 20 seconds is used. Change this timeout to a value like 500 milliseconds.
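If you prefer to script the change, the two values above could be set like this (a hedged sketch; it assumes both values are DWORDs and that the process runs elevated):
// Set the CRL retrieval timeouts described above (assumed to be DWORD values).
using (var key = Microsoft.Win32.Registry.LocalMachine.CreateSubKey(
        @"SOFTWARE\Microsoft\Cryptography\OID\EncodingType 0\CertDllCreateCertificateChainEngine\Config"))
{
    key.SetValue("ChainUrlRetrievalTimeoutMilliseconds", 200, Microsoft.Win32.RegistryValueKind.DWord);
    key.SetValue("ChainRevAccumulativeUrlRetrievalTimeoutMilliseconds", 500, Microsoft.Win32.RegistryValueKind.DWord);
}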
There is also a more specific solution for Microsoft-signed assemblies (this is from the BizTalk documentation, but applies to any assembly load):
Manually load Microsoft Certificate Revocation Lists
When starting a .NET application, the .NET Framework will attempt to download the Certificate Revocation List (CRL) for any signed assembly. If your system does not have direct access to the Internet, or is restricted from accessing the Microsoft.com domain, this may delay startup of BizTalk Server. To avoid this delay at application startup, you can use the following steps to manually download and install the code signing Certificate Revocation Lists on your system.
1. Download the latest CRL updates from http://crl.microsoft.com/pki/crl/products/CodeSignPCA.crl and http://crl.microsoft.com/pki/crl/products/CodeSignPCA2.crl.
2. Move the CodeSignPCA.crl and CodeSignPCA2.crl files to the isolated system.
3. From a command prompt, enter the following command to use the certutil utility to update the local certificate store with the CRLs downloaded in step 1:
certutil -addstore CA c:\CodeSignPCA.crl
The CRL files are updated regularly, so you should consider setting up a recurring task of downloading and installing the CRL updates. To view the next update time, double-click the .crl file and view the value of the Next Update field.
Not sure, but is the delay long enough that initial DNS lookups could be the culprit? (How long is the delay versus a normal call?)
And/or: is this URI internal to the network, or on a different internal network? I have seen some weird networking delays from using load-balancing profiles inside a network that isn't set up right; the firewalls, load balancers, and other network profiles might be "fighting" the initial connections...
I am not a great networking guy, but you might want to see what an SA has to say about this on serverfault.com as well. Good luck.
There is always a delay the first time SQLCLR loads the necessary assemblies.
That should be the case not only for your function MakeWebRequest, but also for any .NET function in the SQLCLR.
HttpWebRequest is part of the System.Net assembly, which is not part of the supported libraries.
I'd recommend using the System.Web.Services library instead to make web service calls from inside SQLCLR.
I have tested this, and my first cold run (after a SQL Server service restart) took 3 seconds (not 30 as in your case); all subsequent runs take 0 seconds.
The code sample I used to build the DLL:
using System;
using System.Data;
using System.Net;
using System.IO;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
namespace MySQLCLR
{
    public static class WebRequests
    {
        public static void MakeWebRequest(string address, string parameters, int connectTO)
        {
            string returnData;
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(String.Concat(address.ToString(), "?", parameters.ToString()));
            request.Timeout = (int)connectTO;
            request.Method = "GET";
            using (WebResponse response = request.GetResponse())
            {
                using (Stream responseStream = response.GetResponseStream())
                {
                    using (StreamReader reader = new StreamReader(responseStream))
                    {
                        returnData = reader.ReadToEnd();
                        reader.Close();
                    }
                    responseStream.Close();
                }
                response.Close();
            }
            SqlDataRecord rec = new SqlDataRecord(new SqlMetaData[] { new SqlMetaData("response", SqlDbType.NVarChar, 10000000) });
            rec.SetValue(0, returnData);
            SqlContext.Pipe.Send(rec);
        }
    }
}