WCF Client Configuration: how can I check if an endpoint is in the config file, and fall back to code if not?

Looking to make a Client that sends serialized Message objects back to a server via WCF.
To make things easy for the end developers (in different departments), it would be best if they didn't need to know how to edit their config file to set up the client endpoint data.
That said, it would also be brilliant if the endpoint wasn't embedded/hard-coded into the Client either.
A mixed scenario would appear to me to be the easiest solution to roll out:
IF (described in config) USE config file ELSE fall back to hard-coded endpoint.
What I've found out is:
new Client() fails if no config file definition is found.
new Client(binding, endpoint) works.
therefore:
Client client;
try {
    client = new Client();
} catch {
    // Guess it's not defined in the config file...
    // Fall back to the hard-coded solution:
    client = new Client(binding, endpoint);
}
But is there any way to check (other than try/catch) whether the config file has an endpoint declared?
Wouldn't the above also fail if the endpoint is defined in the config file but configured incorrectly? It would be good to distinguish between the two conditions.

I would like to propose an improved version of AlexDrenea's solution, which uses only the dedicated types for configuration sections.
Configuration configuration = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
ServiceModelSectionGroup serviceModelGroup = ServiceModelSectionGroup.GetSectionGroup(configuration);
if (serviceModelGroup != null)
{
    ClientSection clientSection = serviceModelGroup.Client;
    // Make all your tests about the correctness of the endpoints here.
}
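For example, the endpoint tests could look something like this (a minimal sketch; ChannelEndpointElement lives in System.ServiceModel.Configuration, and the contract name is a hypothetical placeholder for whatever contract your generated client uses):
bool endpointFound = false;
foreach (ChannelEndpointElement endpoint in clientSection.Endpoints)
{
    // Match on the contract of the generated proxy (hypothetical name here).
    if (endpoint.Contract == "MyNamespace.IMessageService" && endpoint.Address != null)
    {
        endpointFound = true;
        break;
    }
}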

Here is a way to read the configuration file and load the data into an easy-to-manage object:
Configuration c = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
ConfigurationSectionGroup csg = c.GetSectionGroup("system.serviceModel");
if (csg != null)
{
    ConfigurationSection css = csg.Sections["client"];
    if (css is ClientSection)
    {
        ClientSection cs = (ClientSection)css;
        // Make all your tests about the correctness of the endpoints here.
    }
}
The "cs" object will expose a collection named "endpoints" that allows you to access all the properties that you find in the config file.
Make sure you also treat the "else" branches of the "if"s and treat them as fail cases (configuration is invalid).
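Putting the check together with the question's fallback idea, the construction can then be branched explicitly instead of relying on try/catch (a sketch; ConfigHasUsableEndpoint is a hypothetical helper wrapping the tests above, and Client, binding and endpoint are the question's generated proxy and hard-coded defaults):
Client client;
if (ConfigHasUsableEndpoint())
{
    // The endpoint is declared in the config file, so the default constructor is safe.
    client = new Client();
}
else
{
    // The config file has no usable endpoint: fall back to the hard-coded one.
    client = new Client(binding, endpoint);
}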

Related

Blazor - Circular references - serialization and deserialization default options

In a Blazor WebAssembly app, I have a single server-side method that returns results with circular references.
I found out that I can handle this situation on the server side by adding the following:
builder.Services.AddControllersWithViews()
    .AddJsonOptions(options =>
    {
        options.JsonSerializerOptions.ReferenceHandler = ReferenceHandler.Preserve;
    });
and on the client side:
var options = new JsonSerializerOptions() { ReferenceHandler = ReferenceHandler.Preserve };
var r = await _http.GetFromJsonAsync<MyObject>($"api/mycontroller/mymethod", options);
Unfortunately, this way reference handling is enabled for every method on the server, which introduces "$id" keys in almost all of my methods' results.
This would force me to change every client call to add the ReferenceHandler.Preserve option.
Is there a way to specify ReferenceHandler.Preserve for some methods only (server side), or alternatively an option to force ReferenceHandler.Preserve for every GetFromJsonAsync (client side)?
You can use custom middleware on your server. In the custom middleware, you can check the URL passed by Blazor: if the URL meets your requirements, execute the method in the middleware; if not, just ignore it.
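If you would rather avoid middleware, one way to scope the behavior to specific methods is to return a JsonResult carrying its own serializer options from just those actions. This is only a sketch, assuming a standard API controller serialized by System.Text.Json (needs using Microsoft.AspNetCore.Mvc, System.Text.Json and System.Text.Json.Serialization; the service call is a hypothetical stand-in for whatever produces the circular-reference results):
[HttpGet("mymethod")]
public IActionResult MyMethod()
{
    var result = _myService.GetResults(); // hypothetical data source
    var options = new JsonSerializerOptions { ReferenceHandler = ReferenceHandler.Preserve };
    // These options apply to this action only, overriding the global JSON options.
    return new JsonResult(result, options);
}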

How to effectively set the ServerUrl in the NSwag settings?

I have a working ASP.NET 5 application with a REST API and a Swagger interface using the NSwag library (so not Swashbuckle, as many people use instead). I have found (or at least I thought so) a way to set the server URL; there is a property to set this. Here is my code:
app.UseSwaggerUi3(settings =>
{
    settings.ServerUrl = "https://XXXXX.YYYYY.com/ZZZZ";
    settings.TransformToExternalPath = (url, request) =>
    {
        // Get the UI to properly find the relative path of the swagger json instead of the absolute path.
        string outputUrl;
        if (url.EndsWith(".json") || request.Path.ToString().EndsWith("/")) outputUrl = ".." + url;
        else outputUrl = request.PathBase + "." + url;
        return outputUrl;
    };
});
However, when running my application and using the Swagger interface, the server URL I set is not used at all; it simply refers to the same host as it does without setting it. I know it is possible to change the routing behavior of Swagger when needed (as explained here; I also used this code), but that basically solves the problem of not being able to find the swagger json. I did not find a way to effectively set the server URL: when I use the code shown in this post, the request URL when executing a request refers to the same URL as when not using it.
How can I fix this? How do I effectively set the server URL? This is so strange; it looks like the property does not do anything at all.
For me it seems to work well using the Host property in PostProcess. I set it to null to get the UI working for all incoming host names.
I'm on the Owin version, but the same looks like it should work in 5/6 as well.
RouteTable.Routes.MapOwinPath("swagger", app =>
{
    ..
    app.UseSwaggerUi3(typeof(Global).Assembly, c => {
        // Fix to make the UI work regardless of the hostname serving it.
        c.PostProcess = document => {
            document.Host = null;
        };
    });
});
The ServerUrl property looks like it's only used as RedirectUrl in an eventual OAuth2 flow.
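For ASP.NET Core 5/6, the same trick can be applied in NSwag's document middleware rather than the UI settings. This is a sketch assuming the NSwag.AspNetCore package; in OpenAPI 3 documents the Servers list plays the role that the Host property plays in Swagger 2.0:
app.UseOpenApi(settings =>
{
    settings.PostProcess = (document, request) =>
    {
        // Replace whatever NSwag inferred with an explicit server URL,
        // or leave the list empty to make the UI use the serving host.
        document.Servers.Clear();
        document.Servers.Add(new NSwag.OpenApiServer { Url = "https://XXXXX.YYYYY.com/ZZZZ" });
    };
});
app.UseSwaggerUi3();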

S3A client and local S3 mock

To create end-to-end local tests of a data workflow, I utilize a "mock S3" container (e.g. adobe/S3Mock). It seems to work just fine. However, some parts of the system rely on the S3A client, and as far as I can see, its format does not allow pointing to a particular nameserver or endpoint.
Is it possible to make S3A work in a local environment?
Are you talking about the ASF Hadoop S3A connector? Nobody has tested it against S3 mock AFAIK (I've never seen it before!), but it does work with non-AWS endpoints.
Set fs.s3a.endpoint to the URL of your S3 connection. There are some settings for switching from https to http (fs.s3a.connection.ssl.enabled = false) and moving from virtual hosts to directories (fs.s3a.path.style.access = true) which will also be needed.
further reading
Like I said: nobody has done this. We developers just go against the main AWS endpoints with their problems (latency, inconsistency, error reporting, etc.), precisely because that's what you get in production. But for your local testing, it will simplify your life (and you can run it under Jenkins without having to give it any secrets).
The answer by @stevel worked for me. Here is the code if someone wants to refer to it:
class S3WriterTest {
    private static S3Mock api;
    private static AmazonS3 mockS3client;

    @BeforeAll
    public static void setUp() {
        // Start the mock S3 service using findify.
        api = new S3Mock.Builder().withPort(8001).withInMemoryBackend().build();
        api.start();
        /* AWS S3 client setup.
         * The withPathStyleAccessEnabled(true) trick is required to overcome S3's default
         * DNS-based bucket access scheme, which results in attempts to connect to
         * addresses like "bucketname.localhost" and requires a specific DNS setup.
         */
        EndpointConfiguration endpoint = new EndpointConfiguration("http://localhost:8001", "us-west-2");
        mockS3client = AmazonS3ClientBuilder
            .standard()
            .withEndpointConfiguration(endpoint)
            .withPathStyleAccessEnabled(true)
            .withCredentials(new AWSStaticCredentialsProvider(new AnonymousAWSCredentials()))
            .build();
        mockS3client.createBucket("test-bucket");
    }

    @AfterAll
    public static void tearDown() {
        api.shutdown();
    }

    @Test
    void unitTestForHadoopCodeWritingUsingS3A() {
        Configuration hadoopConfig = getTestConfiguration();
        ........
    }

    private static Configuration getTestConfiguration() {
        Configuration config = new Configuration();
        config.set("fs.s3a.endpoint", "http://127.0.0.1:8001");
        config.set("fs.s3a.connection.ssl.enabled", "false");
        config.set("fs.s3a.path.style.access", "true");
        config.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider");
        config.set("fs.s3a.access.key", "foo");
        config.set("fs.s3a.secret.key", "bar");
        return config;
    }
}

In Ratpack, how can I configure loading configuration from an external file?

I have a Ratpack app written with the Groovy DSL. (Embedded in Java, so not a script.)
I want to load the server's SSL certificates from a config file supplied in the command line options. (The certs will be directly embedded in the config, or possibly in a PEM file referenced somewhere in the config.)
For example:
java -jar httpd.jar /etc/app/sslConfig.yml
sslConfig.yml:
---
ssl:
  privateKey: file:///etc/app/privateKey.pem
  certChain: file:///etc/app/certChain.pem
I seem to have a chicken-and-egg problem: I want to use the serverConfig's facilities for reading the config file in order to configure the SslContext, but the server config isn't created yet at the point where I want to load the SslContext.
To illustrate, the DSL definition I have is something like this:
// SSL config POJO definition
class SslConfig {
    String privateKey
    String certChain
    SslContext build() { /* ... */ }
}

// ... other declarations here...

Path configPath = Paths.get(args[1]) // get this path from the CLI options

ratpack {
    serverConfig {
        yaml "/defaultConfig.yaml" // Defaults defined in this resource
        yaml configPath // The user-supplied config file
        env()
        sysProps('genset-server')
        require("/ssl", SslConfig) // Map the config to a POJO
        ssl sslConfig // HOW DO I GET AN INSTANCE OF that SslConfig POJO HERE?
        baseDir BaseDir.find()
    }
    handlers {
        get { // ...
        }
    }
}
Possibly there is a solution to this (loading the SSL context in a later block?).
Or possibly there is just a better way to go about the whole thing?
You could create a separate ConfigDataBuilder to load up a config object to deserialize your SSL config.
Alternatively, you can bind directly to server.ssl. All of the ServerConfig properties bind to the server space within the config.
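A sketch of that first suggestion, assuming Ratpack's ConfigData API (configPath and SslConfig are the question's own names):
// Build a second, stand-alone config object just for the SSL block,
// before the serverConfig block runs.
def sslConfigData = ConfigData.of { b -> b.yaml(configPath) }
SslConfig sslConfig = sslConfigData.get("/ssl", SslConfig)
// ...then inside serverConfig: ssl sslConfig.build()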
The solution I am currently using is this, with the addition of a builder() method to SslConfig which returns an SslContextBuilder defined using its other fields.
ratpack {
    serverConfig {
        // Defaults defined in this resource
        yaml RatpackEntryPoint.getResource("/defaultConfig.yaml")
        // Optionally load the config path passed via the configFile parameter (if not null)
        switch (configPath) {
            case ~/.*[.]ya?ml/: yaml configPath; break
            case ~/.*[.]json/: json configPath; break
            case ~/.*[.]properties/: props configPath; break
        }
        env()
        sysProps('genset-server')
        require("/ssl", SslConfig) // Map the config to a POJO
        baseDir BaseDir.find()
        // This is the important change. It apparently needs to come last,
        // because otherwise it prevents later config directives from working.
        ssl build().getAsConfigObject('/ssl', SslConfig).object.builder().build()
    }
    handlers {
        get { // ...
        }
    }
}
Essentially this performs an extra build of the ServerConfig in order to redefine the input to the second build, but it works.

Apache Http Client Put Request Error

I'm trying to upload a file using the Apache Http Client's PUT method. The code is as below;
def putFile(resource: String, file: File): (Int, String) = {
  val httpClient = new DefaultHttpClient(connManager)
  httpClient.getCredentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(un, pw))
  val url = address + "/" + resource
  val put = new HttpPut(url)
  put.setEntity(new FileEntity(file, "application/xml"))
  executeHttp(httpClient, put) match {
    case Success(answer) => (answer.getStatusLine.getStatusCode, "Successfully uploaded file")
    case Failure(e) => {
      e.printStackTrace()
      (-1, e.getMessage)
    }
  }
}
When I try running the method, I see the following error:
org.apache.http.NoHttpResponseException: The target server failed to respond
at org.apache.http.impl.conn.DefaultResponseParser.parseHead(DefaultResponseParser.java:101)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:252)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:281)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:247)
at org.apache.http.impl.conn.AbstractClientConnAdapter.receiveResponseHeader(AbstractClientConnAdapter.java:219)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:298)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:633)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:454)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
I do not know what has gone wrong. I'm able to do GET requests, but PUT seems not to work! Any clues as to where I should look?
Look on the server. If GET works but PUT does not, then you have to figure out the receiving end.
Also, you may want to write a simple HTML file with a form that uses the PUT method to rule out your Java part.
As a side note: it's technically possible that something in between stops the request from going through or the response from reaching you. It's best to set up a dummy HTTP server to test against.
Maybe it's also a timeout issue, and the server takes too long to process your PUT.
The connection you are trying to use is a stale connection and therefore the request is failing.
But why are you only seeing an error for the PUT request and you are not seeing it for the GET request?
If you check the DefaultHttpRequestRetryHandler class, you will see that by default HttpClient attempts to automatically recover from I/O exceptions. The default auto-recovery mechanism is limited to just a few exceptions that are known to be safe.
HttpClient will make no attempt to recover from any logical or HTTP protocol errors (those derived from the HttpException class).
HttpClient will automatically retry those methods that are assumed to be idempotent: your GET request, but not your PUT request!
HttpClient will automatically retry those methods that fail with a transport exception while the HTTP request is still being transmitted to the target server (i.e. the request has not been fully transmitted to the server).
This is why you don't notice any error with your GET request, because the retry mechanism handles it.
You should define a CustomHttpRequestRetryHandler extending the DefaultHttpRequestRetryHandler. Something like this:
public class CustomHttpRequestRetryHandler extends DefaultHttpRequestRetryHandler {
    @Override
    public boolean retryRequest(IOException exception, int executionCount, HttpContext context) {
        if (exception instanceof NoHttpResponseException) {
            return true;
        }
        return super.retryRequest(exception, executionCount, context);
    }
}
Then just assign your CustomHttpRequestRetryHandler:
final HttpClientBuilder httpClientBuilder = HttpClients.custom();
httpClientBuilder.setRetryHandler(new CustomHttpRequestRetryHandler());
And that's it; now your PUT request is handled by your new retry handler (like the GET was by the default one).
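A short usage sketch to round it off (assuming HttpClient 4.3+; the URL and file are placeholders):
CloseableHttpClient httpClient = httpClientBuilder.build();
HttpPut put = new HttpPut("http://example.com/resource");
put.setEntity(new FileEntity(file, ContentType.APPLICATION_XML));
try (CloseableHttpResponse response = httpClient.execute(put)) {
    // The retry handler now retries NoHttpResponseException for this PUT as well.
    System.out.println(response.getStatusLine().getStatusCode());
}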