Cannot use Redis cache for APQ/query planning in Apollo Router 1.6.0 - redis

I have been trying to use the experimental_cache feature, specifically the external caching with redis.
In the documentation (Caching in the Apollo Router - Apollo GraphQL Docs) it states that “it can be tested by building a custom Router binary, with the Cargo feature experimental_cache”.
I have taken this to mean that it should be added to Cargo.toml like this:
[features]
experimental_cache = []
With configuration like this:
supergraph:
  apq:
    experimental_cache:
      in_memory:
        limit: 512
      redis:
        urls: ["redis://..."]
  query_planning:
    experimental_cache:
      in_memory:
        limit: 512
      redis:
        urls: ["redis://..."]
However, doing this gives me the error:
ERROR configuration had errors:
1. /supergraph/apq/experimental_cache

supergraph:
  apq:
    experimental_cache:
      ┌ in_memory:
      |   limit: 512
      | redis:
      |   urls: ["redis://..."]
      └-----> Additional properties are not allowed ('redis' was unexpected)

2. /supergraph/query_planning/experimental_cache

  query_planning:
    experimental_cache:
      ┌ in_memory:
      |   limit: 512
      | redis:
      |   urls: ["redis://..."]
      └-----> Additional properties are not allowed ('redis' was unexpected)

2023-01-19T14:03:27.064894Z ERROR no valid configuration was supplied
Error: no valid configuration was supplied
I’m pretty sure I need additional configuration in order to use the redis part (the experimental_cache works fine without it), but it is not described in the documentation.
If someone could point me in the right direction it would be greatly appreciated.
Router version is 1.6.0
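For what it's worth, `[features] experimental_cache = []` declares a new, empty feature in your own crate rather than enabling the router's feature. If the custom binary depends on the `apollo-router` crate, the feature would more likely need to be enabled on that dependency itself — a sketch, not verified against 1.6.0:

```toml
# Cargo.toml of the custom router binary (sketch)
[dependencies]
# Enable the router's own Cargo feature on the dependency instead of
# declaring an empty feature in this crate:
apollo-router = { version = "1.6.0", features = ["experimental_cache"] }
```

Alternatively, when building the router from its own source tree, the feature can be passed on the command line: `cargo build --release --features experimental_cache`.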

Related

Spring Cloud Gateway Request Rate Limiter is not working with Redis Cluster

I am trying to add a Redis request rate limiter to a gateway project. The Redis cluster is already up with 6 nodes in Docker, but the Redis request rate limiter does not seem to be working in the gateway project.
Here is the config
spring:
  redis:
    cluster:
      nodes: ${REDIS_CLUSTER_NODES}
      maxRedirects: ${REDIS_CLUSTER_MAX_REDIRECTS}
...
filters:
  - name: RequestRateLimiter
    args:
      key-resolver: "#{#userRemoteAddressResolver}"
      redis-rate-limiter.replenishRate: 1
      redis-rate-limiter.burstCapacity: 2
      redis-rate-limiter.requestedTokens: 1
There is no error message and no 429 HTTP status in the responses. Does RequestRateLimiter not work with a Redis cluster? Am I missing something? Thanks in advance.

Swagger file security scheme defined but not in use

I have a Swagger 2.0 file that has an auth mechanism defined, but I am getting errors telling me that we aren't using it. The exact error message is “Security scheme was defined but never used”.
How do I make sure my endpoints are protected using the authentication I created? I have tried a bunch of different things, but nothing seems to work.
I am not sure whether the security scheme is actually applied; I think it is, because we are using it in production.
I would really love some help with this, as I am worried that a competitor might use this to their advantage and steal some of our data.
swagger: "2.0"
# basic info is basic
info:
  version: 1.0.0
  title: Das ERP
# host config info
# Added by API Auto Mocking Plugin
host: virtserver.swaggerhub.com
basePath: /rossja/whatchamacallit/1.0.0
#host: whatchamacallit.lebonboncroissant.com
#basePath: /v1
# always be schemin'
schemes:
  - https
# we believe in security!
securityDefinitions:
  api_key:
    type: apiKey
    name: api_key
    in: header
    description: API Key
# a maze of twisty passages all alike
paths:
  /dt/invoicestatuses:
    get:
      tags:
        - invoice
      summary: Returns a list of invoice statuses
      produces:
        - application/json
      operationId: listInvoiceStatuses
      responses:
        200:
          description: OK
          schema:
            type: object
            properties:
              code:
                type: integer
              value:
                type: string
securityDefinitions alone is not enough: this section defines the available security schemes but does not apply them.
To actually apply a security scheme to your API, you need to add security requirements at the root level or to individual operations.
security:
  - api_key: []
See the API Keys guide for details.
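If only some endpoints should require the key, the same requirement can instead be attached to individual operations — a sketch using the operation from the file above:

```yaml
paths:
  /dt/invoicestatuses:
    get:
      summary: Returns a list of invoice statuses
      operationId: listInvoiceStatuses
      # apply the scheme to this operation only
      security:
        - api_key: []
      responses:
        200:
          description: OK
```

Root-level `security` sets a default for every operation; an operation-level `security` overrides that default for just that operation.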

How to configure Varnish in an API-platform project? [Response size limit issue]

Sometimes in my preproduction and production environments, the Varnish container sends me this error:
Error (null) Backend fetch failed
Backend fetch failed
Guru Meditation:
XID: (null)
This is due to the size of the response body.
So I implemented this test in my Postman test collection:
pm.test("Size is under 3Ko", function () {
    pm.expect(pm.response.responseSize).to.be.below(3000);
});
to be sure that this error does not appear again.
But I am wondering how I can configure Varnish properly to accept a reasonable response size.
This is my configuration:
Api Platform 2.5.1
VCL 4.0
Varnish documentation states that the default maximum size of an HTTP response is 32 KB.
You can tune this by setting the http_resp_size runtime parameter.
Here's an example of an increased http_resp_size value:
varnishd -p http_resp_size=1M
If that doesn't help, please share the varnishlog output for that specific page, as well as the associated VCL code.
If you're unsure whether or not your http_resp_size was set to the correct value, you can run the following command on your Varnish server:
$ varnishadm param.show http_resp_size
Hope this helps.
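Note that a parameter changed at runtime with `varnishadm param.set` does not survive a restart; to make the change permanent, the `-p` flag has to be part of the `varnishd` startup command. A sketch for a systemd-based install (the unit name and `ExecStart` line here are illustrative and may differ on your distribution):

```ini
# /etc/systemd/system/varnish.service.d/override.conf
# (created via `systemctl edit varnish`, then `systemctl restart varnish`)
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,256m -p http_resp_size=1M
```

In a Docker/API Platform setup the equivalent is adding `-p http_resp_size=1M` to the `varnishd` command in the container's entrypoint or compose file.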

serverless-api-gateway-caching plugin is not setting the cache size

I am trying to set the AWS API Gateway cache using the serverless-api-gateway-caching plugin.
Everything is working fine except the cacheSize.
This is my configuration for the caching:
caching:
  enabled: true
  clusterSize: '13.5'
  ttlInSeconds: 3600
  cacheKeyParameters:
    - name: request.path.param1
    - name: request.querystring.param2
The cache is configured correctly, but the cache size is always the default, '0.5'.
Any idea what is wrong?
sls -v
1.42.3
node --version
v9.11.2
serverless-api-gateway-caching: 1.4.0
Regards
Because the "Cache Capacity" setting is global per stage, it is not possible to set it per endpoint.
So the plugin checks this parameter only in the global serverless configuration, ignoring it at the endpoint level.
It means that the right configuration is:
custom:
  apiGatewayCaching:
    enabled: true
    clusterSize: '13.5'
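Putting this together, a sketch of the split (function name, handler, and path below are illustrative): capacity lives once at the top level, while the per-endpoint block keeps only endpoint-specific settings:

```yaml
custom:
  apiGatewayCaching:
    enabled: true
    clusterSize: '13.5'        # cache capacity: global per stage

functions:
  getThing:                    # hypothetical function
    handler: handler.getThing
    events:
      - http:
          path: /things/{param1}
          method: get
          caching:
            enabled: true
            ttlInSeconds: 3600
            cacheKeyParameters:
              - name: request.path.param1
              - name: request.querystring.param2
```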

Pattern matching for profile in Spring Cloud Config Server

Context
I am attempting to separate configuration information for our applications using the pattern-matching feature in Spring Cloud Config Server. I have created a repo for "production" environment with a property file floof.yaml. I have created a repo for "development" environment with a property file floof-dev.yaml.
My server config:
spring:
  application:
    name: "bluemoon"
  cloud:
    config:
      server:
        git:
          uri: file://${user.home}/tmp/prod
          repos:
            development:
              pattern:
                - */dev
              uri: file://${user.home}/tmp/dev
After starting the server instance, I can successfully retrieve the config content using curl, and can verify which content was served by referring to the "source" element as well as the values for the properties themselves.
Expected Behavior
When I fetch http://localhost:8080/floof/prod I expect to see the source "$HOME/tmp/prod/floof.yaml" and the values from that source, and the actual results match that expectation.
When I fetch http://localhost:8080/floof/dev I expect to see the source "$HOME/tmp/dev/floof-dev.yaml" and the values from that source, but the actual result is the "production" file and contents (the same as if I had fetched .../floof/prod instead).
My Questions
Is my expectation of behavior incorrect? I assume not since there is an example in the documentation in the "Git backend" section that suggests separation by profile is a thing.
Is my server config incorrectly specifying the "development" repo? I turned up the logging verbosity in the server instance and saw nothing in there that called attention to itself in terms of misconfiguration.
Are the property files subject to a naming convention that I'm not following?
I had the same issue. Here is how I resolved it:
spring cloud config pattern match for profile
Also, check if you are using Brixton.M5 version.
After some debugging in the pattern-matching source code, here is how I resolved the issue. You can choose one of two ways.
application.yml
server:
  port: 8888
spring:
  cloud:
    config:
      server:
        git:
          uri: ssh://xxxx#github/sample/cloud-config-properties.git
          repos:
            development:
              pattern: '*/development' ## give in quotes
              uri: ssh://git#xxxgithub.com/development.git
OR
development:
  pattern: xx*/development,*/development
  uri: ssh://git#xxxgithub.com/development.git
Since a value after pattern: is not allowed to start with a wildcard ('*'), I first gave a generic match (xx*/development); the second value is */development. Since pattern takes multiple values, the second pattern will match the profile.
A bare pattern: */development gives a YAML error: expected alphabetic or numeric character, but found /.
The reason the profile-pattern git repo was not identified: although Spring allows multiple array values for pattern, each beginning with a '-' in the yml file, the pattern matcher took the '-' as part of the string to be matched, i.e. it was looking for the pattern '-*/development' instead of '*/development'.
repos:
  development:
    pattern:
      -*/development
      -*/staging
Another issue I observed: if I wrote the pattern array as '- */development' (note the space after the hyphen, which marks each entry of the array), the yml file failed to parse because the value starts with '*/development', with the error: expected alphabetic or numeric character, but found /
repos:
  development:
    pattern:
      - */development
      - */staging
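In other words, quoting sidesteps the YAML restriction entirely, so the list form also works once each entry is quoted — a sketch (the dev repo URI is taken from the question):

```yaml
repos:
  development:
    pattern:
      - '*/development'   # quoted, so YAML accepts the leading '*'
      - '*/staging'
    uri: file://${user.home}/tmp/dev
```

An unquoted leading '*' is rejected because YAML reserves '*' at the start of a scalar for alias nodes; single quotes make it a plain string.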