Hi, I was trying to implement Ocelot for our experimental tests on dev.
Here is the endpoint of the API that I want to reach via Ocelot. I am using port 443 for both projects,
but I am getting 502 Bad Gateway all the time.
Endpoint => https://localhost/document/api/v1/Documents/XYZ
"ReRoutes": [
{
"DownstreamPathTemplate": "/document/api/v1/Documents/{name}",
"DownstreamScheme": "https",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": 443
}
],
"UpstreamPathTemplate": "/apigateway/{name}/document",
"UpstreamHttpMethod": [ "Post" ],
"Priority": 0
}
],
"GlobalConfiguration": {
"BaseUrl": "https://localhost:443"
}
}
Microgateway alias name => "apigateway"
API alias name => "document"
In addition to this, I was able to debug in Visual Studio, but whenever I host both apps on my local IIS I get a 502 Bad Gateway.
It appears that the configuration you have used routes the request back to the gateway itself, resulting in a loop:
the upstream call to the base URL "localhost:443" is forwarded downstream to the same "localhost:443".
Furthermore, later versions of Ocelot look for a Routes section in the configuration instead of ReRoutes (see the Ocelot documentation).
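For illustration, a minimal sketch of what the route could look like on a recent Ocelot version, assuming the document API is given its own binding (the port 44380 below is an assumed example, not taken from the question), so the downstream no longer points back at the gateway itself:

{
  "Routes": [
    {
      "DownstreamPathTemplate": "/document/api/v1/Documents/{name}",
      "DownstreamScheme": "https",
      "DownstreamHostAndPorts": [
        { "Host": "localhost", "Port": 44380 }
      ],
      "UpstreamPathTemplate": "/apigateway/{name}/document",
      "UpstreamHttpMethod": [ "Post" ]
    }
  ],
  "GlobalConfiguration": {
    "BaseUrl": "https://localhost:443"
  }
}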
I am using AspNetCoreRateLimit version 4.0.1 and have done all the setup in a .NET 6 Web API. I can see that rate limiting is working when I send a call via Postman.
However, when I add IpRateLimitPolicies with a specific IP address, the settings are not applied.
I use Postman, and this time I set the proxy IP address to 127.0.0.1. I can see that the IP hitting the API is set correctly when I inspect Request.HttpContext.Connection.RemoteIpAddress.
I registered the services as follows in Program.cs:
_serviceCollection.AddOptions();
_serviceCollection.AddMemoryCache();
_serviceCollection.Configure<IpRateLimitOptions>(builder.Configuration.GetSection("IpRateLimiting"));
_serviceCollection.Configure<IpRateLimitPolicies>(builder.Configuration.GetSection("IpRateLimitPolicies"));
_serviceCollection.AddInMemoryRateLimiting();
_serviceCollection.AddSingleton<IIpPolicyStore, MemoryCacheIpPolicyStore>();
_serviceCollection.AddSingleton<IRateLimitCounterStore, MemoryCacheRateLimitCounterStore>();
_serviceCollection.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
_serviceCollection.AddSingleton<IRateLimitConfiguration, RateLimitConfiguration>();
Also added:
app.UseIpRateLimiting();
My appsettings.json looks like this:
{
  "IpRateLimiting": {
    "EnableEndpointRateLimiting": false,
    "StackBlockedRequests": false,
    "RealIPHeader": "X-Real-IP",
    "ClientIdHeader": "X-ClientId",
    "IpWhitelist": [],
    "EndpointWhitelist": [],
    "ClientWhitelist": [],
    "HttpStatusCode": 429,
    "GeneralRules": [
      {
        "Endpoint": "*",
        "Period": "10s",
        "Limit": 1
      }
    ]
  },
  "IpRateLimitPolicies": {
    "IpRules": [
      {
        "Ip": "127.0.0.1",
        "Rules": [
          {
            "Endpoint": "*",
            "Period": "20s",
            "Limit": 2
          }
        ]
      }
    ]
  }
}
But apparently the settings under IpRateLimitPolicies are not applied.
I wonder if I have missed anything here?
Thank you
After testing, I think the AspNetCoreRateLimit package is not fully compatible with .NET 5 and .NET 6. Maybe the .NET Core 3.1 version will be more stable.
You can submit an issue on GitHub.
Apparently I have missed some configuration in Program.cs or Startup.cs:
https://github.com/stefanprodan/AspNetCoreRateLimit/issues/305
Since we are using Startup, I have added the following in the Configure method:
var ipPolicyStore = app.ApplicationServices.GetRequiredService<IIpPolicyStore>();
ipPolicyStore.SeedAsync().GetAwaiter().GetResult();
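If you are on the minimal hosting model instead (Program.cs only, no Startup class), a rough sketch of the equivalent seeding step, assuming the registrations shown earlier, looks like this:

using AspNetCoreRateLimit;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
// ... the AddOptions / AddMemoryCache / Configure<...> / AddInMemoryRateLimiting
// registrations from the question go here, on builder.Services ...

var app = builder.Build();

// Seed the IpRateLimitPolicies section into the policy store; without this step
// the per-IP rules from configuration are never loaded.
using (var scope = app.Services.CreateScope())
{
    var ipPolicyStore = scope.ServiceProvider.GetRequiredService<IIpPolicyStore>();
    await ipPolicyStore.SeedAsync();
}

app.UseIpRateLimiting();
app.MapControllers();
app.Run();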
I have an ASP.NET Core (3.1) application which is self-hosted and running as a service. I would like to expose an HTTPS endpoint for it. On the same machine IIS is installed with HTTPS already configured, together with a certificate.
The certificate seems to be stored in the local computer certificate store:
I can also list it via PowerShell:
> get-childitem cert:\LocalMachine\My\ | format-table NotAfter, Subject
NotAfter Subject
-------- -------
27.10.2023 07:38:45 <irrelevant>
08.03.2022 09:52:44 CN=a7642e58-2cdf-4e9b-a277-60fad84d7c64, DC=3336d6b0-b132-47ee-a49b-3ab470a5336e
23.02.2022 21:51:53 CN=a7642e58-2cdf-4e9b-a277-60fad84d7c64, DC=3336d6b0-b132-47ee-a49b-3ab470a5336e
27.10.2031 06:48:06 CN=a7642e58-2cdf-4e9b-a277-60fad84d7c64
26.10.2024 10:41:03 E=****.com, CN=****, OU=IT, O=****, L=****, S=***, C=**
I changed the appsettings.json to use the certificate from the store:
{
  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Warning"
    }
  },
  "AllowedHosts": "*",
  "Kestrel": {
    "EndPoints": {
      "Http": {
        "Url": "http://*:5000"
      },
      "HttpsDefaultCert": {
        "Url": "https://*:5001"
      }
    },
    "Certificates": {
      "Default": {
        "Subject": "E=****.com, CN=****, OU=IT, O=****, L=****, S=***, C=**",
        "Store": "My",
        "Location": "LocalMachine",
        "AllowInvalid": "true"
      }
    }
  }
}
However, this does not seem to work. I always get the following error:
System.InvalidOperationException: The requested certificate E=****.com, CN=****, OU=IT, O=****, L=****, S=***, C=** could not be found in LocalMachine/My with AllowInvalid setting: True
I do not know what could be the problem. The only thing I think might be problematic is that the certificate subject actually contains newlines:
I do not know if this is the problem, and I do not know how to enter it in appsettings.json, as multiline values cannot be entered.
I've managed to track down the issue. Kestrel uses FindBySubjectName when searching for the certificate.
FindBySubjectName does a substring search and will not match the full subject of the certificate. If your certificate subject is something like 'CN=my-certificate', then searching for 'CN=my-certificate' will not find anything; searching only for 'my-certificate' will work.
Additional note: in addition to specifying the correct search expression, make sure that the account under which you are running the application has sufficient permissions to read the certificate from the certificate store. Certificates have ACLs, so you can grant the account read access instead of running your app as an administrator.
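As an illustration (the subject name my-certificate is hypothetical, not the asker's certificate), the same lookup Kestrel performs can be reproduced with the X509Store API to find a search string that actually matches:

using System;
using System.Security.Cryptography.X509Certificates;

// Reproduce Kestrel's certificate lookup: it uses FindBySubjectName, which is a
// substring match against the subject, so a "CN=" prefix usually prevents a match.
using var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly);

var matches = store.Certificates.Find(
    X509FindType.FindBySubjectName,
    "my-certificate",        // "my-certificate" matches; "CN=my-certificate" typically finds nothing
    validOnly: false);

foreach (var cert in matches)
    Console.WriteLine(cert.Subject);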
I refer to the documentation for configuring SSL certificates for an ASP.NET Core app running on Kestrel.
I noticed that some URL and port settings also get stored in the Properties/launchSettings.json file.
See here: Configure endpoints for the ASP.NET Core Kestrel web server
Further, I noticed that you have put the certificate under Certificates:Default. I found other ways to configure the certificate; you could try testing them.
In the following appsettings.json example:
Set AllowInvalid to true to permit the use of invalid certificates (for example, self-signed certificates).
Any HTTPS endpoint that doesn't specify a certificate (HttpsDefaultCert in the example that follows) falls back to the cert defined under Certificates:Default or the development certificate.
{
  "Kestrel": {
    "Endpoints": {
      "Http": {
        "Url": "http://localhost:5000"
      },
      "HttpsInlineCertFile": {
        "Url": "https://localhost:5001",
        "Certificate": {
          "Path": "<path to .pfx file>",
          "Password": "$CREDENTIAL_PLACEHOLDER$"
        }
      },
      "HttpsInlineCertAndKeyFile": {
        "Url": "https://localhost:5002",
        "Certificate": {
          "Path": "<path to .pem/.crt file>",
          "KeyPath": "<path to .key file>",
          "Password": "$CREDENTIAL_PLACEHOLDER$"
        }
      },
      "HttpsInlineCertStore": {
        "Url": "https://localhost:5003",
        "Certificate": {
          "Subject": "<subject; required>",
          "Store": "<certificate store; required>",
          "Location": "<location; defaults to CurrentUser>",
          "AllowInvalid": "<true or false; defaults to false>"
        }
      },
      "HttpsDefaultCert": {
        "Url": "https://localhost:5004"
      }
    },
    "Certificates": {
      "Default": {
        "Path": "<path to .pfx file>",
        "Password": "$CREDENTIAL_PLACEHOLDER$"
      }
    }
  }
}
Schema notes:
Endpoint names are case-insensitive. For example, HTTPS and Https are equivalent.
The Url parameter is required for each endpoint. The format for this parameter is the same as the top-level Urls configuration parameter except that it's limited to a single value.
These endpoints replace those defined in the top-level Urls configuration rather than adding to them. Endpoints defined in code via Listen are cumulative with the endpoints defined in the configuration section (see the sketch after these notes).
The Certificate section is optional. If the Certificate section isn't specified, the defaults defined in Certificates:Default are used. If no defaults are available, the development certificate is used. If there are no defaults and the development certificate isn't present, the server throws an exception and fails to start.
The Certificate section supports multiple certificate sources.
Any number of endpoints may be defined in Configuration as long as they don't cause port conflicts.
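A small C# sketch of the cumulative behaviour mentioned above (minimal hosting model for brevity; the port 5005 listener is an assumed example, not from the question): the default builder loads the Kestrel section from appsettings.json, and a Listen* call in code adds an endpoint on top of those defined in configuration:

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(options =>
{
    // Added on top of the endpoints defined under "Kestrel:Endpoints" in
    // appsettings.json; it does not replace them.
    options.ListenLocalhost(5005);
});

var app = builder.Build();
app.MapGet("/", () => "OK");
app.Run();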
Reference: Replace the default certificate from configuration
I am using Ocelot - API gateway for .NET Core
https://github.com/ThreeMammals/Ocelot
Scenario:
I have the following sites
Ocelot API Gateway .NET Core
Site A - Angular app
Site B - .net core API
Site C - .net core API
Now what I want is that all requests should first reach Ocelot, and from there be routed to the respective app and APIs.
Requests first go to Ocelot, and from there routing should take place as follows:
/ - route to the Angular app (Site A)
/b - route to API (Site B)
/c - route to API (Site C)
I am able to route /b and /c to the respective APIs. I just need to know whether Ocelot is suitable for routing to an app like the Angular app I have used here, or whether it is designed only for routing APIs in microservices. What are its pros and cons when an Angular app is used?
I've done something similar. Following your example, it should be something like this:
Ocelot API Gateway .NET Core ----> (http://entrypoint.com)
Site A - Angular app ----> (http://localhost:5001/)
Site B - .net core API ----> (http://localhost:5002/b)
Site C - .net core API ----> (http://localhost:5003/c)
Configuration in the Ocelot project:
{
  "ReRoutes": [
    {
      "DownstreamPathTemplate": "/b/{everything}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "localhost",
          "Port": 5002
        }
      ],
      "UpstreamPathTemplate": "/b/{everything}",
      "UpstreamHttpMethod": [ "Get", "Post" ]
    },
    {
      "DownstreamPathTemplate": "/c/{everything}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "localhost",
          "Port": 5003
        }
      ],
      "UpstreamPathTemplate": "/c/{everything}",
      "UpstreamHttpMethod": [ "Get", "Post" ]
    },
    {
      "DownstreamPathTemplate": "/{everything}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "localhost",
          "Port": 5001
        }
      ],
      "UpstreamPathTemplate": "/{everything}",
      "UpstreamHttpMethod": [ "Get", "Post" ]
    }
  ],
  "GlobalConfiguration": {
    "BaseUrl": "https://entrypointurl.com"
  }
}
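For completeness, here is a sketch of the gateway host wiring that goes with this configuration, based on the getting-started guide referenced below (shown with the minimal hosting model; older Ocelot versions call the same AddOcelot/UseOcelot from Startup, and the file name ocelot.json is just the conventional default):

using Ocelot.DependencyInjection;
using Ocelot.Middleware;

var builder = WebApplication.CreateBuilder(args);

// Load the route configuration shown above and register the Ocelot services.
builder.Configuration.AddJsonFile("ocelot.json", optional: false, reloadOnChange: true);
builder.Services.AddOcelot(builder.Configuration);

var app = builder.Build();

// Every request that reaches the gateway is matched against the Upstream templates
// and proxied to the corresponding downstream host and port.
await app.UseOcelot();

app.Run();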
Ref:
Ocelot Getting Started
https://ocelot.readthedocs.io/en/latest/introduction/gettingstarted.html
Ocelot Routing
https://ocelot.readthedocs.io/en/latest/features/routing.html
I recently went through the tutorial for load balancing apps in DC/OS using marathon-lb (in the example they balance some nginx containers: https://dcos.io/docs/1.9/networking/marathon-lb/marathon-lb-advanced-tutorial/). I am trying to use this approach to internally load balance my own custom application, a Play (Scala) app.

I have the internal marathon-lb set up and can successfully use it for the nginx container, but when I try to use my own Docker image I cannot get it to work. I start the service with my custom image and I can access it fine using the IP and port that get assigned to it (i.e. if the service gets deployed on 10.0.0.0 and is available on port 1234, then curl http://10.0.0.0:1234/ works as expected, and I can also make my API calls as defined in my application routes). However, when I try to access the app through the load balancer (curl -i http://marathon-lb-internal.marathon.mesos:10002, where 10002 is the service port), I get this message:
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
For reference, here is the JSON file I'm using to start my custom service:
{
  "id": "my-app",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "my_repo/my_image:1.0.0",
      "network": "BRIDGE",
      "portMappings": [
        { "hostPort": 0, "containerPort": 9000, "servicePort": 10002, "protocol": "tcp" }
      ],
      "parameters": [
        { "key": "env", "value": "USER_NAME=user" },
        { "key": "env", "value": "USER_PASSWORD=password" }
      ],
      "forcePullImage": true
    }
  },
  "instances": 1,
  "cpus": 1,
  "mem": 1000,
  "healthChecks": [{
    "protocol": "HTTP",
    "path": "/v1/health",
    "portIndex": 0,
    "timeoutSeconds": 10,
    "gracePeriodSeconds": 10,
    "intervalSeconds": 2,
    "maxConsecutiveFailures": 10
  }],
  "labels": {
    "HAPROXY_GROUP": "internal"
  },
  "uris": [ "https://s3.amazonaws.com/my_bucket/my_docker_credentials" ]
}
I had the same problem and found the solution here:
marathon-lb health check failing on all spray.io containers
You need to add
"HAPROXY_0_BACKEND_HTTP_HEALTHCHECK_OPTIONS": " http-send-name-header Host\n timeout check {healthCheckTimeoutSeconds}s\n"
to your config so that the REST layer doesn't reject the health check from Marathon.
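Applied to the app definition from the question, the extra label sits next to HAPROXY_GROUP (sketch; the value is taken verbatim from the linked answer):

"labels": {
  "HAPROXY_GROUP": "internal",
  "HAPROXY_0_BACKEND_HTTP_HEALTHCHECK_OPTIONS": " http-send-name-header Host\n timeout check {healthCheckTimeoutSeconds}s\n"
}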
I can't find any resources that simply say: here's where your cert goes and here's how to enable it. I have the cert there when I run gcloud compute ssl-certificates list. I have a cluster with Kubernetes running and exposing HTTP traffic via this service:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "foo-frontend-service"
  },
  "spec": {
    "selector": {
      "app": "foo-frontend-rc"
    },
    "ports": [
      {
        "protocol": "TCP",
        "port": 80,
        "targetPort": 3009
      }
    ]
  }
}
I need to know how to put the cert in the right place to be utilized.
I need to know how to reconfigure my service.
I need to know what my new SSL endpoint will be. Is it the same?
K8s doesn't have special TLS support for ordinary Services. You need to use one of the following methods:
using Ingress: see http://kubernetes.io/docs/user-guide/ingress/#tls. You need to choose an Ingress controller which implements the Ingress functionality; you can use GLBC if you are on GCE, or you can use the nginx one. Both of them support TLS. Please note that Ingress is still a beta feature with limitations (see the sketch at the end of this answer).
The service-loadbalancer in the contrib repo also supports TLS: https://github.com/kubernetes/contrib/tree/master/service-loadbalancer#ssl-termination
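As a minimal sketch of the Ingress option, in the same JSON style as the Service above (the secret name foo-tls is hypothetical; it would be a TLS secret holding your certificate and key, e.g. created with kubectl create secret tls foo-tls --cert=cert.pem --key=key.pem, and the API version shown is the beta Ingress API current at the time of this answer). The HTTPS endpoint then becomes the address provisioned by the Ingress controller's load balancer, not the Service's own endpoint:

{
  "kind": "Ingress",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "foo-frontend-ingress"
  },
  "spec": {
    "tls": [
      { "secretName": "foo-tls" }
    ],
    "backend": {
      "serviceName": "foo-frontend-service",
      "servicePort": 80
    }
  }
}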