Rate limit policy settings are not applied using AspNetCoreRateLimit

I am using AspNetCoreRateLimit version 4.0.1 and I have done all the setup in a .NET 6 web API. I can see that rate limiting works when I send a call via Postman.
However, when I add IpRateLimitPolicies with a specific IP address, those settings are not applied.
I used Postman again and this time set the proxy IP address to 127.0.0.1. I can see that the IP hitting the API is set correctly when I check Request.HttpContext.Connection.RemoteIpAddress.
I registered the services as follows in Program.cs:
_serviceCollection.AddOptions();
_serviceCollection.AddMemoryCache();
_serviceCollection.Configure<IpRateLimitOptions>(builder.Configuration.GetSection("IpRateLimiting"));
_serviceCollection.Configure<IpRateLimitPolicies>(builder.Configuration.GetSection("IpRateLimitPolicies"));
_serviceCollection.AddInMemoryRateLimiting();
_serviceCollection.AddSingleton<IIpPolicyStore, MemoryCacheIpPolicyStore>();
_serviceCollection.AddSingleton<IRateLimitCounterStore, MemoryCacheRateLimitCounterStore>();
_serviceCollection.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
_serviceCollection.AddSingleton<IRateLimitConfiguration, RateLimitConfiguration>();
I also added:
app.UseIpRateLimiting();
My appsettings.json looks like this:
{
  "IpRateLimiting": {
    "EnableEndpointRateLimiting": false,
    "StackBlockedRequests": false,
    "RealIPHeader": "X-Real-IP",
    "ClientIdHeader": "X-ClientId",
    "IpWhitelist": [],
    "EndpointWhitelist": [],
    "ClientWhitelist": [],
    "HttpStatusCode": 429,
    "GeneralRules": [
      {
        "Endpoint": "*",
        "Period": "10s",
        "Limit": 1
      }
    ]
  },
  "IpRateLimitPolicies": {
    "IpRules": [
      {
        "Ip": "127.0.0.1",
        "Rules": [
          {
            "Endpoint": "*",
            "Period": "20s",
            "Limit": 2
          }
        ]
      }
    ]
  }
}
But apparently the settings under IpRateLimitPolicies are not applied.
Have I missed anything here?
Thank you.

After testing, I think the AspNetCoreRateLimit package is not fully compatible with .NET 5 and .NET 6. The .NET Core 3.1 version may be more stable.
You can submit an issue on GitHub.

Apparently I had missed some configuration in Program.cs or Startup.cs:
https://github.com/stefanprodan/AspNetCoreRateLimit/issues/305
Since we are using Startup, I added the following in the Configure method to seed the IP policy store:
var ipPolicyStore = app.ApplicationServices.GetRequiredService<IIpPolicyStore>();
ipPolicyStore.SeedAsync().GetAwaiter().GetResult();
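For the .NET 6 minimal hosting model used in the question (everything in Program.cs, no Startup class), a rough equivalent sketch is to seed the store after builder.Build() and before app.Run(). The lines below assume the service registrations shown in the question have been added to builder.Services; only the SeedAsync call is the documented fix, the rest is illustrative scaffolding:

// Program.cs (.NET 6 minimal hosting) - sketch, not a drop-in file
using AspNetCoreRateLimit;

var builder = WebApplication.CreateBuilder(args);

// ... the registrations from the question, added to builder.Services ...

var app = builder.Build();

// Seed the IP policy store so the "IpRateLimitPolicies" section is loaded
// into the memory cache; otherwise only the GeneralRules are applied.
var ipPolicyStore = app.Services.GetRequiredService<IIpPolicyStore>();
await ipPolicyStore.SeedAsync();

app.UseIpRateLimiting();

// ... other middleware and endpoint mapping ...
app.Run();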


How can I customize the CHANGELOG.md output using the standard-version npm package?

I'm running the standard-version command each time I want to publish a new version, but the resulting entries in CHANGELOG.md look like this:
### [10.1.9](https://github.com/my-project-name/compare/v10.1.8...v10.1.9) (2021-03-29)
### [10.1.8](https://github.com/my-project-name/compare/v10.1.7...v10.1.8) (2021-03-29)
### [10.1.7](https://github.com/my-project-name/compare/v10.1.6...v10.1.7) (2021-03-29)
First, the links do not work - the GitHub URL is not correct and I want to configure it to the right URL. Second, I'd like to configure which entries are shown in the changelog file (there are several commit types).
I tried this documentation but didn't find anything that helps:
https://github.com/conventional-changelog/conventional-changelog
So how do I configure the way standard-version builds CHANGELOG.md? Can someone provide an example?
Yes. According to the docs:
You can configure standard-version either by:
Placing a standard-version stanza in your package.json (assuming your project is JavaScript).
Creating a .versionrc, .versionrc.json or .versionrc.js.
If you are using a .versionrc.js your default export must be a configuration object, or a function returning a configuration object.
Any of the command line parameters accepted by standard-version can instead be provided via configuration.
Please refer to the conventional-changelog-config-spec for details on available configuration options.
Example .versionrc:
{
  "types": [
    { "type": "feat", "section": "Features" },
    { "type": "fix", "section": "Bug Fixes" },
    { "type": "chore", "hidden": true },
    { "type": "docs", "hidden": true },
    { "type": "style", "hidden": true },
    { "type": "refactor", "section": "Refactor" },
    { "type": "perf", "section": "Performance" },
    { "type": "test", "hidden": true }
  ]
}
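The broken compare links from the question can be addressed through the same file: the conventional-changelog-config-spec that standard-version follows also exposes URL-format fields such as compareUrlFormat and commitUrlFormat. A hedged sketch, where the owner/repository values are placeholders for your actual GitHub project:

{
  "compareUrlFormat": "https://github.com/my-org/my-project-name/compare/{{previousTag}}...{{currentTag}}",
  "commitUrlFormat": "https://github.com/my-org/my-project-name/commit/{{hash}}",
  "types": [
    { "type": "feat", "section": "Features" },
    { "type": "fix", "section": "Bug Fixes" }
  ]
}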

Ocelot microgateway hosted in IIS

Hi, I am trying to implement Ocelot for our experimental tests on dev.
Here is the endpoint of the API that I want to reach via Ocelot. Both projects use port 443, but I am getting a 502 Bad Gateway all the time.
Endpoint => https://localhost/document/api/v1/Documents/XYZ
"ReRoutes": [
{
"DownstreamPathTemplate": "/document/api/v1/Documents/{name}",
"DownstreamScheme": "https",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": 443
}
],
"UpstreamPathTemplate": "/apigateway/{name}/document",
"UpstreamHttpMethod": [ "Post" ],
"Priority": 0
}
],
"GlobalConfiguration": {
"BaseUrl": "https://localhost:443"
}
}
Microgateway alias name =>"apigateway"
Api alias name => "document"
In addition to this, I was able to debug in Visual Studio, but whenever I host both apps on my local IIS I get a 502 Bad Gateway.
It appears that the configuration you have used is redirecting the request to the gateway itself, resulting in a loop:
i.e. the upstream call to the base URL "localhost:443" is forwarded to the downstream "localhost:443" - the same address.
Furthermore, later versions of Ocelot look for Routes in the configuration instead of ReRoutes (see the documentation).
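A sketch of what that could look like, assuming a recent Ocelot version (Routes instead of ReRoutes) and that the document API is reachable on a binding distinct from the gateway's; the port 44301 below is purely a placeholder for whatever port the document site actually listens on:

{
  "Routes": [
    {
      "DownstreamPathTemplate": "/document/api/v1/Documents/{name}",
      "DownstreamScheme": "https",
      "DownstreamHostAndPorts": [
        { "Host": "localhost", "Port": 44301 }
      ],
      "UpstreamPathTemplate": "/apigateway/{name}/document",
      "UpstreamHttpMethod": [ "Post" ]
    }
  ],
  "GlobalConfiguration": {
    "BaseUrl": "https://localhost:443"
  }
}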

How do I make the API Management service logger deploy after the Application Insights resource?

I am trying to make the following ARM template deploy an APIM service logger; however, the service logger starts to deploy before the App Insights resource and fails. The App Insights resource is in a separate template. I added a dependsOn statement and thought that would do the job, but that didn't work either. Also, the code below actually works if App Insights is already deployed.
Does anyone have any pointers?
{
  "type": "Microsoft.ApiManagement/service/loggers",
  "name": "[concat(variables('apiManagementInstanceName'), '/', parameters('appInsightsName'))]",
  "apiVersion": "2018-01-01",
  "properties": {
    "loggerType": "applicationInsights",
    "description": "Logger resources to APIM",
    "credentials": {
      "instrumentationKey": "[reference(resourceId('Microsoft.Insights/components', parameters('appInsightsName')), '2015-05-01').InstrumentationKey]"
    }
  },
  "dependsOn": [
    "[resourceId('microsoft.insights/components', parameters('appInsightsName'))]"
  ]
}
I also tried depending on both the APIM and App Insights resources:
"dependsOn": [
//"[resourceId('Microsoft.ApiManagement/service', variables('apiManagementInstanceName'))]"
"[resourceId('microsoft.insights/components', parameters('appInsightsName'))]"
],
You can use linked templates to reference another template file and define dependencies on it: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/linked-templates#linked-template
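A minimal sketch of that approach, placed in the parent template's resources array: the App Insights template is pulled in as a Microsoft.Resources/deployments resource, and the logger depends on that deployment instead of on the raw components resource. The deployment name, templateLink URI, and apiVersion below are illustrative assumptions, not values from the question:

{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2019-10-01",
  "name": "appInsightsLinkedDeployment",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "https://mystorageaccount.blob.core.windows.net/templates/appinsights.json",
      "contentVersion": "1.0.0.0"
    }
  }
},
{
  "type": "Microsoft.ApiManagement/service/loggers",
  "apiVersion": "2018-01-01",
  "name": "[concat(variables('apiManagementInstanceName'), '/', parameters('appInsightsName'))]",
  "dependsOn": [
    "[resourceId('Microsoft.Resources/deployments', 'appInsightsLinkedDeployment')]",
    "[resourceId('Microsoft.ApiManagement/service', variables('apiManagementInstanceName'))]"
  ],
  "properties": {
    "loggerType": "applicationInsights",
    "description": "Logger resources to APIM",
    "credentials": {
      "instrumentationKey": "[reference(resourceId('Microsoft.Insights/components', parameters('appInsightsName')), '2015-05-01').InstrumentationKey]"
    }
  }
}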

Can't Connect to Service via Marathon-lb using DCOS

I recently went through the tutorial for load balancing apps in DCOS using marathon-lb (in the example they balance some nginx containers: https://dcos.io/docs/1.9/networking/marathon-lb/marathon-lb-advanced-tutorial/). I am trying to use this approach to internally load balance my own custom application. The custom app I am using is a play scala app. I have the internal marathon-lb set up and can successfully use it for the nginx container but when I try to use my own docker image I cannot get this to work. I start up my service with my custom image and I can access the service fine by using the IP and port that gets assigned to it (i.e. if the service gets deployed on 10.0.0.0 and is available on port 1234 then curl http://10.0.0.0:1234/ works as expected and I can also make my api calls as defined in my application routes). However, when I try to access the app through the load balancer (curl -i http://marathon-lb-internal.marathon.mesos:10002, where 10002 is the service port) then I get this message:
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
For reference, here is my json file I'm using to start my custom service:
{
  "id": "my-app",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "my_repo/my_image:1.0.0",
      "network": "BRIDGE",
      "portMappings": [
        { "hostPort": 0, "containerPort": 9000, "servicePort": 10002, "protocol": "tcp" }
      ],
      "parameters": [
        { "key": "env", "value": "USER_NAME=user" },
        { "key": "env", "value": "USER_PASSWORD=password" }
      ],
      "forcePullImage": true
    }
  },
  "instances": 1,
  "cpus": 1,
  "mem": 1000,
  "healthChecks": [{
    "protocol": "HTTP",
    "path": "/v1/health",
    "portIndex": 0,
    "timeoutSeconds": 10,
    "gracePeriodSeconds": 10,
    "intervalSeconds": 2,
    "maxConsecutiveFailures": 10
  }],
  "labels": {
    "HAPROXY_GROUP": "internal"
  },
  "uris": [ "https://s3.amazonaws.com/my_bucket/my_docker_credentials" ]
}
I had the same problem and found the solution here:
marathon-lb health check failing on all spray.io containers
You need to add
"HAPROXY_0_BACKEND_HTTP_HEALTHCHECK_OPTIONS": " http-send-name-header Host\n timeout check {healthCheckTimeoutSeconds}s\n"
to your config so that the REST layer doesn't reject the health check from Marathon.
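In the app definition from the question, that means extending the labels block roughly like this (a sketch; the option string is copied verbatim from the linked answer and may need tuning for your app):

"labels": {
  "HAPROXY_GROUP": "internal",
  "HAPROXY_0_BACKEND_HTTP_HEALTHCHECK_OPTIONS": " http-send-name-header Host\n timeout check {healthCheckTimeoutSeconds}s\n"
}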

List of all environment variables for a Pod

I have a web app on OpenShift v3 (all-in-One), using the Wildfly Builder Image. In addition, I created a service named xyz, to point to an external host+IP. Something like this:
"kind": "Service",
"apiVersion": "v1",
"metadata": { "name": "xyz" },
"spec": {
"ports": [
{ "port": 61616,
"protocol": "TCP",
"targetPort": 61616
}
],
"selector": {}
}
I also have an endpoint, pointing externally, but that is not relevant for this question.
When deployed, my program can access an environment variable named XYZ_PORT=tcp://172.30.192.186:61616
However, I cannot figure out how to see the values of all such variables, either via the web console or using the CLI. Using the web console, I cannot see them being injected into the YAML.
I tried some of the oc env options, but none seem to list what I want.
Let's say you are deploying kitchensink; then the CLI command below should list all the environment variables:
oc env bc/kitchensink --list
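Note that service-discovery variables such as XYZ_PORT are injected into the container at run time rather than defined on the build or deployment config, so they may not appear in that list. Two hedged alternatives, where the resource and pod names are placeholders for your own:

# List the variables defined on the deployment config
oc set env dc/kitchensink --list

# Print the full environment actually visible inside a running pod,
# including the service-injected XYZ_* variables
oc rsh <pod-name> env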