How to configure spring-cloud-gateway to use sleuth to log request/response body - spring-cloud-sleuth

I am looking to develop a gateway server based on spring-cloud-gateway:2.0.2-RELEASE and would like to utilize Sleuth for logging purposes. I have Sleuth running, since when I write to the log I see the Sleuth details (span id, etc.), but I am hoping to see the body of messages being logged automatically. Is there something I need to do to get Sleuth to log the request/response out of the box with Spring Cloud Gateway?
Here are the request headers that arrive at my downstream service:
headers:
{ 'x-request-foo': '2a9c5e36-2c0f-4ad3-926c-cb20d4428462',
  forwarded: 'proto=http;host=localhost;for="0:0:0:0:0:0:0:1:51720"',
  'x-forwarded-for': '0:0:0:0:0:0:0:1',
  'x-forwarded-proto': 'http',
  'x-forwarded-port': '80',
  'x-forwarded-host': 'localhost',
  'x-b3-traceid': '5bd33eb8050c7a32dfce6adfe68b06ca',
  'x-b3-spanid': 'ba202a6d6f3e2893',
  'x-b3-parentspanid': 'dfce6adfe68b06ca',
  'x-b3-sampled': '0',
  host: 'localhost:8080' }
Gradle file in the gateway service...
buildscript {
    ext {
        kotlinVersion = '1.2.61'
        springBootVersion = '2.0.6.RELEASE'
        springCloudVersion = 'Finchley.RELEASE'
    }
}
dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-sleuth:2.0.2.RELEASE"
        mavenBom 'org.springframework.cloud:spring-cloud-gateway:2.0.2.RELEASE'
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
    }
}
dependencies {
    implementation('org.springframework.cloud:spring-cloud-starter-sleuth')
    implementation('org.springframework.cloud:spring-cloud-starter-gateway')
    implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8")
    implementation("org.jetbrains.kotlin:kotlin-reflect")
    testImplementation('org.springframework.boot:spring-boot-starter-test')
}
and finally the application.yml file for the gateway service...
server:
  servlet:
    contextPath: /
  port: 80
spring:
  application:
    name: api.gateway.ben.com
  sleuth:
    trace-id128: true
    sampler:
      probability: 1.0
  cloud:
    gateway:
      routes:
      - id: admin-ui-2
        predicates:
        - Path=/admin-ui-2/echo/*
        filters:
        - SetPath=/fred
        - AddRequestHeader=X-Request-Foo, 2a9c5e36-2c0f-4ad3-926c-cb20d4428462
        - AddResponseHeader=X-Response-Foo, Bar
        uri: http://localhost:8080
logging:
  pattern:
    level: "[%X{X-B3-TraceId}/%X{X-B3-SpanId}] %-5p [%t] %C{2} - %m%n"
  level:
    org.springframework.web: DEBUG

Spring Cloud Gateway can already log the request and response; you just need to change your log level to TRACE.
logging:
  level:
    org.springframework: TRACE
or, to be more precise, to log just the request/response:
logging:
  level:
    org.springframework.core.codec.StringDecoder: TRACE
Another option is to use a filter in Spring Cloud Gateway and log the request/response with any logger you like, such as Log4j:
routeBuilder.route(id,
        r -> {
            return r.path(path).and().method(requestmethod).and()
                    .header(routmap.getRequestheaderkey(), routmap.getRequestheadervalue()).and()
                    // readBody caches the request body so it can be inspected or logged
                    .readBody(String.class, requestBody -> {
                        return true;
                    })
                    .filters(f -> {
                        f.rewritePath(rewritepathregex, replacement);
                        f.prefixPath(perfixpath);
                        f.filter(logFilter);
                        return f;
                    })
                    .uri(uri);
        });
You need to define this "LogFilter" yourself; it contains the actual logging logic.
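For reference, here is a minimal sketch of what such a filter could look like (an assumption on my part, not code from the original post; it logs only the request line and response status, since logging the full body would additionally require decorating the request and response):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.gateway.filter.GatewayFilter;
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

public class LogFilter implements GatewayFilter {

    private static final Logger log = LoggerFactory.getLogger(LogFilter.class);

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        // Log the request line before handing the exchange down the chain.
        log.info("request: {} {}", exchange.getRequest().getMethod(), exchange.getRequest().getURI());
        // Log the response status once the rest of the chain has completed.
        return chain.filter(exchange)
                .then(Mono.fromRunnable(() -> log.info("response status: {}", exchange.getResponse().getStatusCode())));
    }
}
An instance of this class is what the f.filter(logFilter) call in the route definition above expects.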

Related

Infinispan java.lang.SecurityException: ISPN006017: Unauthorized 'PUT' operation

I am trying to put a value in the Infinispan cache using the Hotrod Node.js client. The code runs fine if the server is installed locally. However, when I run the same code against an Infinispan server hosted in a Docker container, I get the following error:
java.lang.SecurityException: ISPN006017: Unauthorized 'PUT' operation
try {
  client = await infinispan.client({
    port: 11222,
    host: '127.0.0.1'
  }, {
    cacheName: 'testcache'
  });
  console.log(`Connected to cache`);
  await client.put('test', 'hello 1');
  await client.disconnect();
} catch (e) {
  console.log(e);
  await client.disconnect();
}
I have tried setting the CORS "allow all" option on the server as well.
You need to provide a custom config.yaml to Docker with the following configuration:
endpoints:
  hotrod:
    auth: false
    enabled: false
    qop: auth
    serverName: infinispan
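The custom file then needs to be supplied to the container when it starts, for example by mounting it. The mount point below is an assumption on my part; check the documentation of the Infinispan server image you are using for the exact path it reads user configuration from:
# assumed mount point and image name; verify against the image documentation
docker run -p 11222:11222 -v $(pwd)/config.yaml:/user-config/config.yaml infinispan/server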
Unfortunately, the Node.js client doesn't support authentication yet. The issue tracking the implementation is https://issues.redhat.com/projects/HRJS/issues/HRJS-36

"Execution failed" when setting up API Gateway and Fargate with AWS CDK

I am trying to set up AWS API Gateway to access a Fargate container in a private VPC, as described here. For this I am using AWS CDK as described below. But when I curl the endpoint after a successful cdk deploy, I get "Internal Server Error" as a response. I can't find any additional information. For some reason API GW can't reach the container.
So when I curl the endpoint like this:
curl -i https://xxx.execute-api.eu-central-1.amazonaws.com/prod/MyResource
... I get the following log output in CloudWatch:
Extended Request Id: NpuEPFWHliAFm_w=
Verifying Usage Plan for request: 757c6b9e-c4af-4dab-a5b1-542b15a1ba21. API Key: API Stage: ...
API Key authorized because method 'ANY /MyResource/{proxy+}' does not require API Key. Request will not contribute to throttle or quota limits
Usage Plan check succeeded for API Key and API Stage ...
Starting execution for request: 757c6b9e-c4af-4dab-a5b1-542b15a1ba21
HTTP Method: GET, Resource Path: /MyResource/test
Execution failed due to configuration error: There was an internal error while executing your request
CDK Code
First I create a network load-balanced Fargate service:
private setupService(): NetworkLoadBalancedFargateService {
  const vpc = new Vpc(this, 'MyVpc');
  const cluster = new Cluster(this, 'MyCluster', {
    vpc: vpc,
  });
  cluster.connections.allowFromAnyIpv4(Port.tcp(5050));
  const taskDefinition = new FargateTaskDefinition(this, 'MyTaskDefinition');
  const container = taskDefinition.addContainer('MyContainer', {
    image: ContainerImage.fromRegistry('vad1mo/hello-world-rest'),
  });
  container.addPortMappings({
    containerPort: 5050,
    hostPort: 5050,
  });
  const service = new NetworkLoadBalancedFargateService(this, 'MyFargateServie', {
    cluster,
    taskDefinition,
    assignPublicIp: true,
  });
  service.service.connections.allowFromAnyIpv4(Port.tcp(5050));
  return service;
}
Next I create the VpcLink and the API Gateway:
private setupApiGw(service: NetworkLoadBalancedFargateService) {
  const api = new RestApi(this, `MyApi`, {
    restApiName: `MyApi`,
    deployOptions: {
      loggingLevel: MethodLoggingLevel.INFO,
    },
  });
  // setup api resource which forwards to container
  const resource = api.root.addResource('MyResource');
  resource.addProxy({
    anyMethod: true,
    defaultIntegration: new HttpIntegration('http://localhost.com:5050', {
      httpMethod: 'ANY',
      options: {
        connectionType: ConnectionType.VPC_LINK,
        vpcLink: new VpcLink(this, 'MyVpcLink', {
          targets: [service.loadBalancer],
          vpcLinkName: 'MyVpcLink',
        }),
      },
      proxy: true,
    }),
    defaultMethodOptions: {
      authorizationType: AuthorizationType.NONE,
    },
  });
  resource.addMethod('ANY');
  this.addCorsOptions(resource);
}
Does anyone have a clue what is wrong with this config?
After hours of trying I finally figured out that the security groups do not seem to be updated correctly when setting up the VpcLink with CDK. Broadening the allowed connections with
service.service.connections.allowFromAnyIpv4(Port.allTraffic())
solved it. I still need to figure out which minimal set of rules is needed instead of allTraffic().
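A narrower rule that might be sufficient (an untested assumption on my part, since NLB health checks and VPC Link traffic originate from addresses inside the VPC) is to allow only the container port from the VPC CIDR:
// Assumption: all legitimate traffic (VPC Link plus NLB health checks) comes
// from inside the VPC, and 5050 is the only port the tasks must accept.
// Peer and Port come from @aws-cdk/aws-ec2; vpc is the Vpc created in setupService().
service.service.connections.allowFrom(Peer.ipv4(vpc.vpcCidrBlock), Port.tcp(5050));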
Additionally, I replaced localhost in the HttpIntegration with the DNS name of the load balancer, like this:
resource.addMethod("ANY", new HttpIntegration(
'http://' + service.loadBalancer.loadBalancerDnsName,
{
httpMethod: 'ANY',
options: {
connectionType: ConnectionType.VPC_LINK,
vpcLink: new VpcLink(this, 'MyVpcLink', {
targets: [service.loadBalancer],
vpcLinkName: 'MyVpcLink',
})
},
}
))

Karate: How to test multipart form-data endpoint? [duplicate]

This question already has an answer here:
How to upload CSV file as a post request in Karate 0.9.0 version?
I have a file upload endpoint (/document) in a controller, defined as follows:
@RestController
public class FileUploadController {

    @Autowired
    private PersonCSVReaderService personCSVReaderService;

    @PostMapping(value = "/document", consumes = {MediaType.MULTIPART_FORM_DATA_VALUE})
    public void handleFileUpload3(@RequestPart("file") MultipartFile file, @RequestPart("metadata") DocumentMetadata metadata) {
        System.out.println(String.format("uploading file %s of %s bytes", file.getOriginalFilename(), file.getSize()));
        personCSVReaderService.readPersonCSV(file, metadata);
    }
}
I can test this endpoint using Advanced Rest Client (ARC) or Postman by defining the "file" part referencing the people.csv file and a text part specifying some sample metadata JSON.
Everything works fine and I get a 200 status back with the people.csv file contents being written to the console output by the service method:
uploading file people.csv of 256 bytes
{Address=1, City=2, Date of Birth=6, Name=0, Phone Number=5, State=3, Zipcode=4}
Person{name='John Brown', address='123 Main St.', city='Scottsdale', state='AZ', zipcode='85259', phoneNumber='555-1212', dateOfBirth='1965-01-01'}
Person{name='Jan Black', address='456 University Dr.', city='Atlanta', state='GA', zipcode='30306', phoneNumber='800-1111', dateOfBirth='1971-02-02'}
Person{name='Mary White', address='789 Possum Rd.', city='Nashville', state='TN', zipcode='37204', phoneNumber='888-2222', dateOfBirth='1980-03-03'}
Now, I want to run this as an automated Karate test. I have specified a MockConfig:
@Configuration
@EnableAutoConfiguration
@PropertySource("classpath:application.properties")
public class MockConfig {
    // Services ...
    @Bean
    public PersonCSVReaderService personCSVReaderService() {
        return new PersonCSVReaderService();
    }

    // Controllers ...
    @Bean
    public FileUploadController fileUploadController() {
        return new FileUploadController();
    }
}
I also have a MockSpringMvcServlet in the classpath, and my karate-config.js is:
function fn() {
    var env = karate.env; // get system property 'karate.env'
    if (!env) {
        env = 'dev';
    }
    karate.log('karate.env system property was:', env);
    var config = {
        env: env,
        myVarName: 'someValue',
        baseUrl: 'http://localhost:8080'
    }
    if (env == 'dev') {
        var Factory = Java.type('MockSpringMvcServlet');
        karate.configure('httpClientInstance', Factory.getMock());
        //var result = karate.callSingle('classpath:demo/headers/common-noheaders.feature', config);
    } else if (env == 'stg') {
        // customize
    } else if (env == 'prod') {
        // customize
    }
    return config;
}
Other Karate tests run OK using the mock servlet.
However, when I run this test against the /document endpoint:
Feature: file upload end-point

Background:
    * url baseUrl
    * configure lowerCaseResponseHeaders = true

Scenario: upload file
    Given path '/document'
    And header Content-Type = 'multipart/form-data'
    And multipart file file = { read: 'people.csv', filename: 'people.csv', contentType: 'text/csv' }
    And multipart field metadata = { name: "joe", description: "stuff" }
    When method post
    Then status 200
I get this error:
16:14:42.674 [main] INFO com.intuit.karate - karate.env system property was: dev
16:14:42.718 [main] INFO o.s.mock.web.MockServletContext - Initializing Spring DispatcherServlet ''
16:14:42.719 [main] INFO o.s.web.servlet.DispatcherServlet - Initializing Servlet ''
16:14:43.668 [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$a4c7d08f] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
16:14:43.910 [main] INFO o.h.validator.internal.util.Version - HV000001: Hibernate Validator 6.0.14.Final
16:14:44.483 [main] INFO o.s.s.c.ThreadPoolTaskExecutor - Initializing ExecutorService 'applicationTaskExecutor'
16:14:44.968 [main] INFO o.s.b.a.e.web.EndpointLinksResolver - Exposing 2 endpoint(s) beneath base path '/actuator'
16:14:45.006 [main] INFO o.s.web.servlet.DispatcherServlet - Completed initialization in 2287 ms
16:14:45.066 [main] INFO c.i.k.mock.servlet.MockHttpClient - making mock http client request: POST - http://localhost:8080/document
16:14:45.085 [main] DEBUG c.i.k.mock.servlet.MockHttpClient -
1 > POST http://localhost:8080/document
1 > Content-Type: multipart/form-data
16:14:45.095 [main] ERROR com.intuit.karate - http request failed: null
file-upload.feature:13 - null
HTML report: (paste into browser to view) | Karate version: 0.9.2
I can only assume that the arguments did not conform to what my endpoint was expecting - I never entered the endpoint in debug mode.
I tried this:
And multipart file file = read('people.csv')
And multipart field metadata = { name: "joe", description: "stuff" }
But that was a non-starter as well.
What am I doing wrong? The people.csv is in the same folder as fileupload.feature, so I am assuming it is finding the file. I also looked at the upload.feature file in the Karate demo project given here:
Karate demo project upload.feature
But I could not make it work. Any help appreciated. Thanks in advance.
The Postman request looks like this: [screenshot omitted]
EDIT UPDATE:
I could not get the suggested answer to work.
Here is the feature file:
Feature: file upload

Background:
    * url baseUrl
    * configure lowerCaseResponseHeaders = true

Scenario: upload file
    Given path '/document'
    And header Content-Type = 'multipart/form-data'
    * def temp = karate.readAsString('people.csv')
    * print temp
    And multipart file file = { value: '#(temp)', filename: 'people.csv', contentType: 'text/csv' }
    And multipart field metadata = { value: {name: 'joe', description: 'stuff'}, contentType: 'application/json' }
    When method post
    Then status 200
And here is the console output from running that test:
09:27:22.051 [main] INFO com.intuit.karate - found scenario at line: 7 - ^upload file$
09:27:22.156 [main] INFO com.intuit.karate - karate.env system property was: dev
09:27:22.190 [main] INFO o.s.mock.web.MockServletContext - Initializing Spring DispatcherServlet ''
09:27:22.190 [main] INFO o.s.web.servlet.DispatcherServlet - Initializing Servlet ''
09:27:23.084 [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$a4c7d08f] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
09:27:23.327 [main] INFO o.h.validator.internal.util.Version - HV000001: Hibernate Validator 6.0.14.Final
09:27:23.896 [main] INFO o.s.s.c.ThreadPoolTaskExecutor - Initializing ExecutorService 'applicationTaskExecutor'
09:27:24.381 [main] INFO o.s.b.a.e.web.EndpointLinksResolver - Exposing 2 endpoint(s) beneath base path '/actuator'
09:27:24.418 [main] INFO o.s.web.servlet.DispatcherServlet - Completed initialization in 2228 ms
09:27:24.435 [main] INFO com.intuit.karate - [print] Name,Address,City,State,Zipcode,Phone Number,Date of Birth
John Brown,123 Main St.,Scottsdale,AZ,85259,555-1212,1965-01-01
Jan Black,456 University Dr.,Atlanta,GA,30306,800-1111,1971-02-02
Mary White,789 Possum Rd.,Nashville,TN,37204,888-2222,1980-03-03
09:27:24.482 [main] INFO c.i.k.mock.servlet.MockHttpClient - making mock http client request: POST - http://localhost:8080/document
09:27:24.500 [main] DEBUG c.i.k.mock.servlet.MockHttpClient -
1 > POST http://localhost:8080/document
1 > Content-Type: multipart/form-data
09:27:24.510 [main] ERROR com.intuit.karate - http request failed: null
file-upload.feature:14 - null
HTML report: (paste into browser to view) | Karate version: 0.9.2
Note: the people.csv file is read successfully and prints to the console.
Refer to this part of the docs: https://github.com/intuit/karate#read-file-as-string
So make this change:
* def temp = karate.readAsString('people.csv')
And multipart file file = { value: '#(temp)', filename: 'people.csv', contentType: 'text/csv' }
EDIT: my bad, also refer: https://github.com/intuit/karate#multipart-file
Feature: upload csv

Background:
    * def admin = read('classpath:com/project/data/adminLogin.json')

Scenario:
    Given url baseUrl
    And header Authorization = admin.token
    And multipart file residentDetails = { read: 'classpath:com/project/data/ResidentApp_Details.csv', filename: 'ResidentApp_Details.csv' }
    When method POST
    Then status 200
Note: only one extra line is added, i.e. And multipart file residentDetails = { read: 'classpath:com/project/data/ResidentApp_Details.csv', filename: 'ResidentApp_Details.csv' }
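Putting the two answers together for the /document endpoint in this question, a combined scenario might look like this (a hedged sketch, not verified against the mock servlet; it sends the CSV via read and the JSON metadata as a multipart field with an explicit contentType):
Scenario: upload file with metadata
    Given path '/document'
    And multipart file file = { read: 'people.csv', filename: 'people.csv', contentType: 'text/csv' }
    And multipart field metadata = { value: { name: 'joe', description: 'stuff' }, contentType: 'application/json' }
    When method post
    Then status 200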

Unable to configure and run pithos.io using AWS Java SDK

I am trying to configure pithos.io on my server testmbr1.kabuter.com:8081:
Here is how I start pithos.io:
java -jar pithos-0.7.5-standalone.jar -f pithos.yaml
My pithos.yaml:
service:
  host: "0.0.0.0"
  port: 8081

logging:
  level: info
  console: true
  overrides:
    io.pithos: debug

options:
  service-uri: testmbr1.kabuter.com
  default-region: myregion

keystore:
  keys:
    AKIAIOSFODNN7EXAMPLE:
      master: true
      tenant: test@example.com
      secret: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'

bucketstore:
  default-region: myregion
  cluster: "45.33.37.148"
  keyspace: storage

regions:
  myregion:
    metastore:
      cluster: "45.33.37.148"
      keyspace: storage
    storage-classes:
      standard:
        cluster: "45.33.37.148"
        keyspace: storage
        max-chunk: "128k"
        max-block-chunk: 1024

cassandra:
  saved_caches_directory: "target/db/saved_caches"
  data_file_directories:
    - "target/db/data"
  commitlog_directory: "target/db/commitlog"
I am using the AWS Java SDK to connect. Below is my JUnit test:
@Test
public void testPithosIO() {
    try {
        ClientConfiguration config = new ClientConfiguration();
        config.setSignerOverride("S3SignerType");
        EndpointConfiguration endpointConfiguration = new EndpointConfiguration("http://testmbr1.kabuter.com:8081",
                "myregion");
        BasicAWSCredentials awsCreds = new BasicAWSCredentials("AKIAIOSFODNN7EXAMPLE",
                "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY");
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("myregion")
                .withClientConfiguration(config)
                .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                .withEndpointConfiguration(endpointConfiguration).build();
        s3Client.createBucket("mybucket1");
        System.out.println(s3Client.getRegionName());
        System.out.println(s3Client.listBuckets());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
My problems are: 1) I am getting: com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to mybucket1.testmbr1.kabuter.com:8081 [mybucket1.testmbr1.kabuter.com/198.105.254.130, mybucket1.testmbr1.kabuter.com/104.239.207.44] failed: connect timed out
This was fixed by adding a mybucket1.testmbr1 CNAME record pointing to testmbr1.kabuter.com.
2) While trying to create a bucket with s3Client.createBucket("mybucket1") I am getting:
com.amazonaws.services.s3.model.AmazonS3Exception: The request signature we calculated does not match the signature you provided. Check your key and signing method. (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: d98b7908-d11e-458a-be27-254b136f344a), S3 Extended Request ID: d98b7908-d11e-458a-be27-254b136f344a
How do I get this working? pithos.io seems to have limited documentation.
Any pointers?
Since my endpoint was using a non-standard port:
http://testmbr1.kabuter.com:8081
I had to define service-uri in pithos.yaml with the port as well:
service-uri: testmbr1.kabuter.com:8081
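If creating a CNAME record per bucket (see problem 1 above) is not practical, another option may be to enable path-style access in the AWS SDK so that bucket names travel in the request path rather than as subdomains. This is an assumption on my part and requires that Pithos accepts path-style requests:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withClientConfiguration(config)
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .withEndpointConfiguration(endpointConfiguration)
        // Requests go to http://testmbr1.kabuter.com:8081/mybucket1/...
        // instead of http://mybucket1.testmbr1.kabuter.com:8081/...,
        // avoiding the per-bucket DNS records.
        .withPathStyleAccessEnabled(true)
        .build();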

Spring Cloud Config (Vault backend) terminating too early

I am using Spring Cloud Config Server to serve configuration for my client apps. To facilitate secrets configuration I am using HashiCorp Vault as a backend. For the remainder of the configuration I am using a Git repo, so I have configured the config server in composite mode. See my config server bootstrap.yml below:
server:
  port: 8888
spring:
  profiles:
    active: local, git, vault
  application:
    name: my-domain-configuration-server
  cloud:
    config:
      server:
        git:
          uri: https://mygit/my-domain-configuration
          order: 1
        vault:
          order: 2
          host: vault.mydomain.com
          port: 8200
          scheme: https
          backend: mymount/generic
This is all working as expected. However, the token I am using is secured with a Vault auth policy. See below:
{
  "rules": "path "mymount/generic/myapp-app,local" {
      policy = "read"
    }
    path "mymount/generic/myapp-app,local/*" {
      policy = "read"
    }
    path "mymount/generic/myapp-app" {
      policy = "read"
    }
    path "mymount/generic/myapp-app/*" {
      policy = "read"
    }
    path "mymount/generic/application,local" {
      policy = "read"
    }
    path "mymount/generic/application,local/*" {
      policy = "read"
    }
    path "mymount/generic/application" {
      policy = "read"
    }
    path "mymount/generic/application/*" {
      policy = "read"
    }"
}
My issue is that I am not storing secrets in all these scopes. I need to specify all these paths just so I can authorize the token to read one secret from mymount/generic/myapp-app,local. If I do not authorize all the other paths the VaultEnvironmentRepository.read() method returns a 403 HTTP status code (Forbidden) and throws a VaultException. This results in complete failure to retrieve any configuration for the app, including GIT based configuration. This is very limiting as client apps may have multiple Spring profiles that have nothing to do with retrieving configuration items. The issue is that config server will attempt to retrieve configuration for all the active profiles provided by the client.
Is there a way to enable fault tolerance or lenience on the config server, so that VaultEnvironmentRepository does not abort and returns any configuration that it is actually authorized to return?
Do you absolutely need the local profile? Would you not be able to get by with just the 'vault' and 'git' profiles in Config Server and use the 'default' profile in each Spring Boot application?
If you use the above suggestion then the only two paths you'd need in your rules (.hcl) file are:
path "mymount/generic/application" {
capabilities = ["read", "list"]
}
and
path "mymount/generic/myapp-app" {
capabilities = ["read", "list"]
}
This assumes that you're writing configuration to
vault write mymount/generic/myapp-app
and not
vault write mymount/generic/myapp-app,local
or similar.
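To illustrate the suggestion, a minimal client-side bootstrap.yml could look like the following (the application name is assumed from the policy paths above; no extra profile is activated, so the client resolves against the 'default' profile):
spring:
  application:
    name: myapp-app
  cloud:
    config:
      uri: http://localhost:8888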