How to record an OCSP-based application with JMeter - testing

I need to record an OCSP-based application in JMeter.
Any suggestions?

Just like any other application. As per Wikipedia:
"Messages communicated via OCSP are encoded in ASN.1 and are usually communicated over HTTP."
So, theoretically, you can record the network flow using the HTTP(S) Test Script Recorder.
However, you won't be able to replay it, because building a valid OCSP request requires some non-trivial computation. Example code to generate the request (e.g. in a JSR223 PreProcessor):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.Reader;

import org.bouncycastle.cert.X509CertificateHolder;
import org.bouncycastle.cert.ocsp.CertificateID;
import org.bouncycastle.cert.ocsp.OCSPReq;
import org.bouncycastle.cert.ocsp.OCSPReqBuilder;
import org.bouncycastle.openssl.PEMParser;
import org.bouncycastle.operator.DigestCalculatorProvider;
import org.bouncycastle.operator.jcajce.JcaDigestCalculatorProviderBuilder;

// JCE provider name and certificate path come from JMeter variables
String BC = vars.get("securityProvider");
String fName = vars.get("certpath");

// Read the PEM-encoded certificate
Reader fR = new BufferedReader(new FileReader(fName));
PEMParser pPar = new PEMParser(fR);
X509CertificateHolder obj = (X509CertificateHolder) pPar.readObject();
pPar.close();

// Build the CertificateID (SHA-1 digest of the issuer plus the serial number)
DigestCalculatorProvider dCP = new JcaDigestCalculatorProviderBuilder().setProvider(BC).build();
CertificateID cId = new CertificateID(dCP.get(CertificateID.HASH_SHA1), obj, obj.getSerialNumber());

// Build the OCSP request and get its DER encoding
OCSPReqBuilder oRB = new OCSPReqBuilder();
oRB.addRequest(cId);
OCSPReq oReq = oRB.build();
byte[] asn1seq = oReq.getEncoded();

// Note: converting raw DER bytes to a String is charset-dependent and lossy;
// prefer sending the bytes directly or Base64-encoding them
String sb = new String(asn1seq);
sampler.getArguments().getArgument(0).setValue(sb);
Check out the How to Test OCSP article for more details.
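Since the DER-encoded request is binary, a safer way to put it into a sampler parameter is Base64 (RFC 6960 also defines a GET variant where the request is Base64-encoded into the URL). A minimal JDK-only sketch; the bytes below are a placeholder standing in for the encoded request above:

```java
import java.util.Base64;

public class OcspEncodeDemo {
    public static void main(String[] args) {
        // placeholder bytes standing in for the DER-encoded OCSP request
        byte[] asn1seq = {0x30, (byte) 0x82, 0x01, 0x1c};

        // Base64 survives being stored in a JMeter variable or URL,
        // unlike new String(bytes), which depends on the platform charset
        String b64 = Base64.getEncoder().encodeToString(asn1seq);
        System.out.println(b64);

        // The encoding round-trips losslessly
        byte[] back = Base64.getDecoder().decode(b64);
        System.out.println(back.length == asn1seq.length); // prints true
    }
}
```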

Related

How to pass a numerical expression in a query param for REST API automation

I am trying to automate the open source mathjs API, which has the URL "https://api.mathjs.org/v4/?expr=2%2F3&precision=3". Below is my code:
import java.util.TreeMap;

import org.junit.Test;

import io.restassured.RestAssured;
import io.restassured.response.Response;
import io.restassured.specification.RequestSpecification;

public class Mathjs2 {

    @Test
    public void getResponseBody() {
        RestAssured.baseURI = "https://api.mathjs.org/v4/";
        RequestSpecification httpRequest = RestAssured.given().relaxedHTTPSValidation();
        TreeMap<String, String> temp = new TreeMap<String, String>();
        temp.put("expr", "2%2F3");
        temp.put("precision", "3");
        httpRequest.queryParams(temp);
        Response response = httpRequest.log().all().get();
        System.out.println(response.getStatusCode());
    }
}
When I execute the code I get a 400 status code, while in Postman it shows 200. The console log below shows that the generated URL does not match the desired one:
Request method: GET
Request URI: https://api.mathjs.org/v4/?expr=2%252F3&precision=3
Required URL: https://api.mathjs.org/v4/?expr=2%2F3&precision=3
Generated URL: https://api.mathjs.org/v4/?expr=2%252F3&precision=3
I don't know why %252F is appearing in the query param instead of %2F. Please help by providing and explaining a solution.
That's because REST Assured URL-encodes parameters out of the box. You just need to tell REST Assured "no URL encoding for this one" by using .urlEncodingEnabled(false):
RequestSpecification httpRequest = RestAssured.given().relaxedHTTPSValidation().urlEncodingEnabled(false);
Reference: https://github.com/rest-assured/rest-assured/wiki/Usage#url-encoding
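The double encoding can be reproduced with the JDK alone: the already-encoded value 2%2F3 has its % re-encoded to %25. An alternative to disabling encoding is to pass the decoded value 2/3 and let REST Assured encode it exactly once; a sketch using only java.net.URLEncoder to illustrate:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class DoubleEncodingDemo {
    public static void main(String[] args) throws Exception {
        // Passing an already-encoded value: the '%' itself gets encoded again
        String alreadyEncoded = URLEncoder.encode("2%2F3", StandardCharsets.UTF_8.name());
        System.out.println(alreadyEncoded); // 2%252F3 -> this is what caused the 400

        // Passing the raw value lets the client encode it exactly once
        String raw = URLEncoder.encode("2/3", StandardCharsets.UTF_8.name());
        System.out.println(raw); // 2%2F3 -> what the mathjs API expects
    }
}
```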

Can't get testcontainers Kafka with sasl.jaas.config to test ACLs working

I'm trying to leverage testcontainers to test Kafka locally in some automated unit tests, and I'm having trouble testing authorization. My goal is to test:
(1) that with no ACLs in the test container, no KafkaProducer is allowed to write to it (currently, even with no ACLs created, a correctly configured producer can still send to the topic; I thought setting the Kafka env variable allow.everyone.if.no.acl.found to false would do the trick, but that doesn't seem to be the case);
(2) that a KafkaProducer with the wrong sasl.jaas.config (i.e. an incorrect apiKey and password) is denied access to the test topic, even if an ACL is set up for all principals.
Below is my code. I can get it to "work", but I haven't been able to produce the two failure scenarios above. I suspect I'm not actually creating the ACLs: when I call adminClient.describeAcls(AclBindingFilter.ANY).values().get() after creating them, I get a "No Authorizer is configured on the broker" error, and posts similar to this suggest that means no ACL binding was actually created.
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.KafkaAdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;
import org.apache.kafka.common.serialization.StringSerializer;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.containers.Network;
import org.testcontainers.utility.DockerImageName;
String topicName = "this-is-a-topic";
String confluentVersion = "5.5.1";
network = Network.newNetwork();
String jaasTemplate = "org.apache.kafka.common.security.plain.PlainLoginModule required %s=\"%s\" %s=\"%s\";";
String jaasConfig = String.format(jaasTemplate, "username", "apiKey", "password", "apiPassword");
kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:" + confluentVersion))
.withNetwork(network)
.withEnv("KAFKA_AUTO_CREATE_TOPICS_ENABLE", "false")
.withEnv("KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND", "false")
.withEnv("KAFKA_SUPER_USERS", "User:OnlySuperUser")
.withEnv("KAFKA_SASL_MECHANISM", "PLAIN")
.withEnv("KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM", "http")
.withEnv("KAFKA_SASL_JAAS_CONFIG", jaasConfig);
kafka.start();
schemaRegistryContainer = new SchemaRegistryContainer(confluentVersion).withKafka(kafka);
schemaRegistryContainer.start();
Properties properties = new Properties();
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());
properties.put("input.topic.name", topicName);
properties.put("input.topic.partitions", "1");
properties.put("input.topic.replication.factor", "1");
adminClient = KafkaAdminClient.create(properties);
AclBinding ACL = new AclBinding(
        new ResourcePattern(ResourceType.TOPIC, topicName, PatternType.LITERAL),
        new AccessControlEntry("User:*", "*", AclOperation.WRITE, AclPermissionType.ALLOW));
var acls = adminClient.createAcls(List.of(ACL)).values();
List<NewTopic> topics = new ArrayList<>();
topics.add(new NewTopic(topicName,
        Integer.parseInt(properties.getProperty("input.topic.partitions")),
        Short.parseShort(properties.getProperty("input.topic.replication.factor"))));
adminClient.createTopics(topics);
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put("input.topic.name", topicName);
props.put("security.protocol", "PLAINTEXT");
props.put("input.topic.partitions", "1");
props.put("input.topic.replication.factor", "1");
props.put("metadata.fetch.timeout.ms", "10000");
props.put("sasl.jaas.config", jaasConfig);
producer = new KafkaProducer<>(props);
String key = "testContainers";
String value = "AreAwesome";
ProducerRecord<String, String> record = new ProducerRecord<>(
        (String) props.get("input.topic.name"), key, value);
try {
    RecordMetadata o = (RecordMetadata) producer.send(record).get();
    System.out.println(o.toString());
} catch (Exception e) {
    e.printStackTrace();
}
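The "No Authorizer is configured on the broker" error is the key symptom here: the cp-kafka image does not enable an authorizer by default, so allow.everyone.if.no.acl.found and the created bindings have nothing to act on. A sketch of the broker settings that would have to be in effect (the authorizer class name is an assumption for Kafka 2.x, which ships in cp-kafka 5.5.1; as container env vars these would become KAFKA_AUTHORIZER_CLASS_NAME and so on):

```properties
# server.properties fragment (assumed values, Kafka 2.4+)
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
super.users=User:OnlySuperUser
```

Note also that the stock KafkaContainer exposes a PLAINTEXT listener, so the producer's sasl.jaas.config is ignored unless the broker is actually configured with a SASL listener; without that, scenario (2) cannot be exercised.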

Use of RabbitTemplate.convertSendAndReceive with org.springframework.messaging.Message

I have successfully used the following to send an org.springframework.amqp.core.Message and receive a byte[]:
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageBuilder;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
Message message = MessageBuilder.withBody(payload).setCorrelationIdString(id).build();
byte[] response = (byte[]) rabbitTemplate.convertSendAndReceive(message, m -> {
    m.getMessageProperties().setCorrelationIdString(id);
    return m;
});
This works fine if the queues are set up to handle the message correctly for Message<?>. But I have a series of queues that use the message type org.springframework.messaging.Message, specifically Message<String>.
Is there a way I can use rabbitTemplate.convertSendAndReceive to send an org.springframework.messaging.Message<String>, such that the following would work?
import org.springframework.messaging.Message;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
Message<String> message =
MessageBuilder.withPayload(payload).setCorrelationId(id).build();
Object returnObject = rabbitTemplate.convertSendAndReceive(message);
I have looked at the MessageConverter, but I am unsure whether I can use that.
Alternatively, should I use org.springframework.messaging.core.GenericMessagingTemplate.convertSendAndReceive?
UPDATE.
I can make it work if I change what I have on the queues from

@Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
public Message<String> transform(Message<String> inMessage) {

to

@Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
public Message<String> transform(Message<?> inMessage) {
    GenericMessage<?> genericMessage = (GenericMessage<?>) inMessage.getPayload();
    String payload = (String) genericMessage.getPayload();
but I would rather not have to change the transformers, as the code in question is for integration tests and the existing code already works with what I have.
END UPDATE
I think I have given enough information, but please let me know if more details are required. Ideally I am looking for a code example, or a pointer to the documentation that answers my question.
Use the RabbitMessagingTemplate (documentation here). It exposes exactly this operation:
public Message<?> sendAndReceive(String exchange, String routingKey, Message<?> requestMessage)
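A sketch of that approach (assuming spring-rabbit is on the classpath and a RabbitTemplate is already configured; the exchange name, routing key, and correlation header name are placeholders):

```java
import org.springframework.amqp.rabbit.core.RabbitMessagingTemplate;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

public class MessagingTemplateSketch {

    public Message<?> roundTrip(RabbitTemplate rabbitTemplate, String payload, String id) {
        // Wraps the existing RabbitTemplate, so its converters and
        // connection settings are reused
        RabbitMessagingTemplate template = new RabbitMessagingTemplate(rabbitTemplate);

        Message<String> request = MessageBuilder.withPayload(payload)
                .setHeader("correlationId", id) // header name is an assumption
                .build();

        // Takes and returns org.springframework.messaging.Message directly
        return template.sendAndReceive("some-exchange", "some-routing-key", request);
    }
}
```

RabbitMessagingTemplate converts between spring-messaging Message<?> and AMQP messages in both directions, so the transformers should be able to keep their Message<String> signature.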

Zapi API - Getting error Expecting claim 'qsh' to have value

I'm just trying to fetch general information from the ZAPI API, but I'm getting this error:
Expecting claim 'qsh' to have value '7f0d00c2c77e4af27f336c87906459429d1074bd6eaabb81249e1042d4b84374' but instead it has the value '1c9e9df281a969f497d78c7636abd8a20b33531a960e5bd92da0c725e9175de9'
API LINK : https://prod-api.zephyr4jiracloud.com/connect/public/rest/api/1.0/config/generalinformation
Can anyone help me, please?
The query string parameters must be sorted in alphabetical order; this will resolve the issue.
Please see this link for reference:
https://developer.atlassian.com/cloud/bitbucket/query-string-hash/
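For context, the qsh claim is a SHA-256 hex digest of a canonical request string of the form METHOD&path&sortedQueryString (per Atlassian's query-string-hash docs), which is why the parameter order matters. A JDK-only sketch of that hashing step (the canonical-form details are simplified here, e.g. parameter encoding is omitted):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

public class QshSketch {

    static String qsh(String method, String path, Map<String, String> params) throws Exception {
        // TreeMap iterates keys in sorted order, which is what qsh requires
        StringBuilder query = new StringBuilder();
        for (Map.Entry<String, String> e : new TreeMap<>(params).entrySet()) {
            if (query.length() > 0) query.append('&');
            query.append(e.getKey()).append('=').append(e.getValue());
        }
        String canonical = method.toUpperCase() + "&" + path + "&" + query;

        // SHA-256 over the canonical request, rendered as lowercase hex
        byte[] hash = MessageDigest.getInstance("SHA-256")
                .digest(canonical.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> params = new TreeMap<>();
        params.put("versionId", "123");
        params.put("projectId", "456");
        // sorted query -> projectId=456&versionId=123
        System.out.println(qsh("GET", "/public/rest/api/1.0/cycles/search", params));
    }
}
```

If the query string you hash is not sorted, the computed qsh differs from the one the server computes, producing exactly the "Expecting claim 'qsh' to have value ... but instead it has ..." error above.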
I can definitely help you with this: you need to generate the JWT token the right way.
package com.thed.zephyr.cloud.rest.client.impl;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

import com.thed.zephyr.cloud.rest.ZFJCloudRestClient;
import com.thed.zephyr.cloud.rest.client.JwtGenerator;

public class JWTGenerator {

    public static void main(String[] args) throws URISyntaxException, IllegalStateException, IOException {
        String zephyrBaseUrl = "https://prod-api.zephyr4jiracloud.com/connect";
        String accessKey = "TYPE YOUR ACCESS KEY-GET IT FROM ZEPHYR";
        String secretKey = "TYPE YOUR SECRET KEY-GET IT FROM ZEPHYR";
        String userName = "TYPE YOUR USER - GET IT FROM ZEPHYR/JIRA";

        ZFJCloudRestClient client = ZFJCloudRestClient.restBuilder(zephyrBaseUrl, accessKey, secretKey, userName).build();
        JwtGenerator jwtGenerator = client.getJwtGenerator();

        String createCycleUri = zephyrBaseUrl + "/public/rest/api/1.0/cycles/search?versionId=<TYPE YOUR VERSION ID HERE>&projectId=<TYPE YOUR PROJECT ID HERE>";
        URI uri = new URI(createCycleUri);
        int expirationInSec = 360;

        // Use the HTTP method of the call you are about to make
        String jwt = jwtGenerator.generateJWT("GET", uri, expirationInSec);
        //String jwt = jwtGenerator.generateJWT("PUT", uri, expirationInSec);
        //String jwt = jwtGenerator.generateJWT("POST", uri, expirationInSec);

        System.out.println("FINAL API : " + uri.toString());
        System.out.println("JWT Token : " + jwt);
    }
}
Also, clone this repository: https://github.com/zephyrdeveloper/zfjcloud-rest-api. It contains all the methods with the respective encodings, and you can build it as a Maven project to have the dependencies imported directly.
I also spent multiple days figuring this out, so be patient; it's only a matter of time until you generate the right JWT.

Multipart Upload Amazon S3

I'm trying to upload a file to Amazon S3 using their API. I tried their sample code, and it uploads the file in several parts. Now the problem is: how do I pause the upload and then resume it? See the following code, as given in their documentation:
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.UploadPartRequest;
public class UploadObjectMPULowLevelAPI {

    public static void main(String[] args) throws IOException {
        String existingBucketName = "*** Provide-Your-Existing-BucketName ***";
        String keyName = "*** Provide-Key-Name ***";
        String filePath = "*** Provide-File-Path ***";

        AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

        // Create a list of UploadPartResponse objects. You get one of these
        // for each part upload.
        List<PartETag> partETags = new ArrayList<PartETag>();

        // Step 1: Initialize.
        InitiateMultipartUploadRequest initRequest =
                new InitiateMultipartUploadRequest(existingBucketName, keyName);
        InitiateMultipartUploadResult initResponse =
                s3Client.initiateMultipartUpload(initRequest);

        File file = new File(filePath);
        long contentLength = file.length();
        long partSize = 5242880; // Set part size to 5 MB.

        try {
            // Step 2: Upload parts.
            long filePosition = 0;
            for (int i = 1; filePosition < contentLength; i++) {
                // Last part can be less than 5 MB. Adjust part size.
                partSize = Math.min(partSize, (contentLength - filePosition));

                // Create request to upload a part.
                UploadPartRequest uploadRequest = new UploadPartRequest()
                        .withBucketName(existingBucketName).withKey(keyName)
                        .withUploadId(initResponse.getUploadId()).withPartNumber(i)
                        .withFileOffset(filePosition)
                        .withFile(file)
                        .withPartSize(partSize);

                // Upload part and add response to our list.
                partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());
                filePosition += partSize;
            }

            // Step 3: Complete.
            CompleteMultipartUploadRequest compRequest =
                    new CompleteMultipartUploadRequest(existingBucketName, keyName,
                            initResponse.getUploadId(), partETags);
            s3Client.completeMultipartUpload(compRequest);
        } catch (Exception e) {
            s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(
                    existingBucketName, keyName, initResponse.getUploadId()));
        }
    }
}
I have also tried the TransferManager example, which takes an Upload object and calls a tryPause(forceCancel) method. But the problem there is that the upload gets cancelled every time I try to pause it.
My question is: how do I use the above code with pause and resume functionality? Also note that I would like to upload multiple files with the same functionality. Help would be much appreciated.
I think you should use the TransferManager sample if you can. If the transfer is being cancelled, it's likely that it simply can't be paused with the configuration of the TransferManager you are using.
This might be because you paused it too early for "pausing" to mean anything besides cancelling, because you are using encryption, or because the file isn't big enough; I believe the default multipart threshold is 16 MB. You can change the TransferManager configuration to allow pausing, depending on why tryPause is failing, except in the case of encryption, where I don't think there is anything you can do.
If you want to enable pause/resume for a file smaller than that threshold, call the setMultipartUploadThreshold(long) method on TransferManagerConfiguration. If you want to be able to pause earlier, use setMinimumUploadPartSize to make it upload in smaller chunks.
In any case, I would advise using the TransferManager if possible, since it's made to do this kind of thing for you. It may also help to find out exactly why the transfer cannot be paused when you call tryPause.
TransferManager performs uploads and downloads asynchronously and doesn't block the current thread. When you call resumeUpload, TransferManager returns immediately with a reference to an Upload; you can use that reference to enquire about the status of the upload.
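Putting the advice together, a pause/resume sketch with the TransferManager from the AWS SDK for Java v1 (the bucket, key, and file path are placeholders, and the threshold values are assumptions chosen to force multipart behaviour on smaller files; this needs real credentials and network access, so it is illustrative only):

```java
import java.io.File;

import com.amazonaws.services.s3.transfer.PauseResult;
import com.amazonaws.services.s3.transfer.PersistableUpload;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

public class PauseResumeSketch {

    public static void main(String[] args) throws Exception {
        TransferManager tm = TransferManagerBuilder.standard()
                // Lower the threshold/part size so smaller files still use
                // multipart uploads and therefore remain pausable
                .withMultipartUploadThreshold(5L * 1024 * 1024)
                .withMinimumUploadPartSize(5L * 1024 * 1024)
                .build();

        Upload upload = tm.upload("my-bucket", "my-key", new File("/path/to/file"));

        // forceCancel = false: only pause if a clean resume is possible
        PauseResult<PersistableUpload> pause = upload.tryPause(false);
        PersistableUpload state = pause.getInfoToResume(); // null if pause wasn't possible

        if (state != null) {
            // Resume later; 'state' can also be serialized to disk and
            // used in another JVM
            Upload resumed = tm.resumeUpload(state);
            resumed.waitForCompletion();
        }
        tm.shutdownNow();
    }
}
```

The same TransferManager instance can upload multiple files concurrently; each upload returns its own Upload handle, which can be paused and resumed independently.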